Test Report: QEMU_macOS 19452

667295c6870455ef3392c60a87bf7f5fdc211f00 : 2024-08-15 : 35803

Failed tests (94/270)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 14.57
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.21
46 TestCertOptions 10.24
47 TestCertExpiration 195.32
48 TestDockerFlags 10.09
49 TestForceSystemdFlag 10.1
50 TestForceSystemdEnv 11.25
95 TestFunctional/parallel/ServiceCmdConnect 30.74
167 TestMultiControlPlane/serial/StopSecondaryNode 312.29
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 225.14
169 TestMultiControlPlane/serial/RestartSecondaryNode 305.17
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.56
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 227.34
177 TestImageBuild/serial/Setup 9.91
180 TestJSONOutput/start/Command 9.9
186 TestJSONOutput/pause/Command 0.08
192 TestJSONOutput/unpause/Command 0.05
209 TestMinikubeProfile 10.23
212 TestMountStart/serial/StartWithMountFirst 10.12
215 TestMultiNode/serial/FreshStart2Nodes 9.94
216 TestMultiNode/serial/DeployApp2Nodes 71.67
217 TestMultiNode/serial/PingHostFrom2Pods 0.09
218 TestMultiNode/serial/AddNode 0.08
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.08
221 TestMultiNode/serial/CopyFile 0.06
222 TestMultiNode/serial/StopNode 0.14
223 TestMultiNode/serial/StartAfterStop 46.29
224 TestMultiNode/serial/RestartKeepsNodes 9.1
225 TestMultiNode/serial/DeleteNode 0.1
226 TestMultiNode/serial/StopMultiNode 3.69
227 TestMultiNode/serial/RestartMultiNode 5.26
228 TestMultiNode/serial/ValidateNameConflict 20.06
232 TestPreload 10
234 TestScheduledStopUnix 9.95
235 TestSkaffold 13.22
238 TestRunningBinaryUpgrade 589.7
240 TestKubernetesUpgrade 19
253 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.35
254 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.06
256 TestStoppedBinaryUpgrade/Upgrade 582.61
258 TestPause/serial/Start 9.92
268 TestNoKubernetes/serial/StartWithK8s 9.78
269 TestNoKubernetes/serial/StartWithStopK8s 5.26
270 TestNoKubernetes/serial/Start 5.3
274 TestNoKubernetes/serial/StartNoArgs 5.32
276 TestNetworkPlugins/group/auto/Start 9.87
277 TestNetworkPlugins/group/kindnet/Start 9.71
278 TestNetworkPlugins/group/flannel/Start 9.89
279 TestNetworkPlugins/group/enable-default-cni/Start 9.92
280 TestNetworkPlugins/group/bridge/Start 9.88
281 TestNetworkPlugins/group/kubenet/Start 9.8
282 TestNetworkPlugins/group/custom-flannel/Start 9.76
283 TestNetworkPlugins/group/calico/Start 9.99
284 TestNetworkPlugins/group/false/Start 10.17
286 TestStartStop/group/old-k8s-version/serial/FirstStart 9.88
287 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
288 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
292 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
293 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
294 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
295 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
296 TestStartStop/group/old-k8s-version/serial/Pause 0.1
298 TestStartStop/group/no-preload/serial/FirstStart 9.93
299 TestStartStop/group/no-preload/serial/DeployApp 0.1
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
303 TestStartStop/group/no-preload/serial/SecondStart 5.24
304 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
305 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
306 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
307 TestStartStop/group/no-preload/serial/Pause 0.1
309 TestStartStop/group/embed-certs/serial/FirstStart 12.07
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.82
312 TestStartStop/group/embed-certs/serial/DeployApp 0.09
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
316 TestStartStop/group/embed-certs/serial/SecondStart 6.42
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/embed-certs/serial/Pause 0.1
327 TestStartStop/group/newest-cni/serial/FirstStart 9.98
328 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
329 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
330 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
331 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
336 TestStartStop/group/newest-cni/serial/SecondStart 5.25
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
340 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (14.57s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-953000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-953000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.56509325s)

-- stdout --
	{"specversion":"1.0","id":"f3a72015-b5bc-45d0-96cd-913df3b57201","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-953000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2deb54e9-1d95-4ba6-aa0c-febf43989628","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19452"}}
	{"specversion":"1.0","id":"62dfa8a5-8d96-43bb-a6b2-c5faa80be142","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig"}}
	{"specversion":"1.0","id":"81898df5-80f1-4a38-bc3e-4e2dcb62afbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"57693cff-0801-49a4-941c-f69ab31a1e62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b76ba2f8-29e9-46e6-ba56-b6e0ef336b98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube"}}
	{"specversion":"1.0","id":"2630a15a-25b3-4bca-bead-dbcdad0ec667","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"277e44b5-b141-463e-9b1b-70c0ec79cb32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c580466-a801-4c0d-a152-3fc3490c2c66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2cd603ca-362f-4e7d-9e29-3fcfeb2defa6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e09e2a4-3dff-4a6f-9624-9f3a414ff249","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-953000\" primary control-plane node in \"download-only-953000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ebd3f1de-4ea3-4dbd-82c3-c3de91a3f256","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7780cb13-9ece-40f0-9c5d-1985cc15abb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19452-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104907960 0x104907960 0x104907960 0x104907960 0x104907960 0x104907960 0x104907960] Decompressors:map[bz2:0x14000647770 gz:0x14000647778 tar:0x14000647700 tar.bz2:0x14000647710 tar.gz:0x14000647720 tar.xz:0x14000647730 tar.zst:0x14000647760 tbz2:0x14000647710 tgz:0x140
00647720 txz:0x14000647730 tzst:0x14000647760 xz:0x14000647780 zip:0x14000647790 zst:0x14000647788] Getters:map[file:0x14000768b40 http:0x14000a0a280 https:0x14000a0a2d0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"5ce1524b-3a7a-4ae4-a978-6623ca2511e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0815 16:05:16.036591    1448 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:05:16.036727    1448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:05:16.036731    1448 out.go:358] Setting ErrFile to fd 2...
	I0815 16:05:16.036733    1448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:05:16.036847    1448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	W0815 16:05:16.036922    1448 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19452-964/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19452-964/.minikube/config/config.json: no such file or directory
	I0815 16:05:16.038195    1448 out.go:352] Setting JSON to true
	I0815 16:05:16.055569    1448 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":285,"bootTime":1723762831,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:05:16.055657    1448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:05:16.060230    1448 out.go:97] [download-only-953000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:05:16.060416    1448 notify.go:220] Checking for updates...
	W0815 16:05:16.060425    1448 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball: no such file or directory
	I0815 16:05:16.066088    1448 out.go:169] MINIKUBE_LOCATION=19452
	I0815 16:05:16.072126    1448 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:05:16.075038    1448 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:05:16.079097    1448 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:05:16.082109    1448 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	W0815 16:05:16.088173    1448 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 16:05:16.088432    1448 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:05:16.093097    1448 out.go:97] Using the qemu2 driver based on user configuration
	I0815 16:05:16.093115    1448 start.go:297] selected driver: qemu2
	I0815 16:05:16.093118    1448 start.go:901] validating driver "qemu2" against <nil>
	I0815 16:05:16.093188    1448 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 16:05:16.095037    1448 out.go:169] Automatically selected the socket_vmnet network
	I0815 16:05:16.101644    1448 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0815 16:05:16.101725    1448 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 16:05:16.101821    1448 cni.go:84] Creating CNI manager for ""
	I0815 16:05:16.101841    1448 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0815 16:05:16.101890    1448 start.go:340] cluster config:
	{Name:download-only-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:05:16.107130    1448 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:05:16.112152    1448 out.go:97] Downloading VM boot image ...
	I0815 16:05:16.112176    1448 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0815 16:05:23.725797    1448 out.go:97] Starting "download-only-953000" primary control-plane node in "download-only-953000" cluster
	I0815 16:05:23.725823    1448 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 16:05:23.786654    1448 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0815 16:05:23.786677    1448 cache.go:56] Caching tarball of preloaded images
	I0815 16:05:23.786863    1448 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 16:05:23.791874    1448 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0815 16:05:23.791881    1448 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 16:05:23.886969    1448 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0815 16:05:29.295115    1448 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 16:05:29.295268    1448 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 16:05:29.990493    1448 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0815 16:05:29.990695    1448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/download-only-953000/config.json ...
	I0815 16:05:29.990713    1448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/download-only-953000/config.json: {Name:mkb837f547f5160dfe32538295c6ec5d3deaeaf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:05:29.990928    1448 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 16:05:29.991130    1448 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0815 16:05:30.529206    1448 out.go:193] 
	W0815 16:05:30.534308    1448 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19452-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104907960 0x104907960 0x104907960 0x104907960 0x104907960 0x104907960 0x104907960] Decompressors:map[bz2:0x14000647770 gz:0x14000647778 tar:0x14000647700 tar.bz2:0x14000647710 tar.gz:0x14000647720 tar.xz:0x14000647730 tar.zst:0x14000647760 tbz2:0x14000647710 tgz:0x14000647720 txz:0x14000647730 tzst:0x14000647760 xz:0x14000647780 zip:0x14000647790 zst:0x14000647788] Getters:map[file:0x14000768b40 http:0x14000a0a280 https:0x14000a0a2d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0815 16:05:30.534332    1448 out_reason.go:110] 
	W0815 16:05:30.543166    1448 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:05:30.546252    1448 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-953000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (14.57s)
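
Note: the exit status 40 above traces to a single HTTP 404. The go-getter checksum fetch for https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 fails, most likely because no darwin/arm64 kubectl binary was published for v1.20.0. A minimal sketch (a standalone check, not part of the test suite) that reproduces the failing request:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The checksum URL go-getter fetches before downloading kubectl itself.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	// Expected for this platform/version pair: "404 Not Found", which go-getter
	// surfaces as "Error downloading checksum file: bad response code: 404".
	fmt.Println(resp.Status)
}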

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19452-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.21s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-790000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-790000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.052385084s)

-- stdout --
	* [offline-docker-790000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-790000" primary control-plane node in "offline-docker-790000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-790000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 16:50:51.557038    3716 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:50:51.557155    3716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:50:51.557161    3716 out.go:358] Setting ErrFile to fd 2...
	I0815 16:50:51.557163    3716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:50:51.557297    3716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:50:51.558444    3716 out.go:352] Setting JSON to false
	I0815 16:50:51.576351    3716 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3019,"bootTime":1723762832,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:50:51.576470    3716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:50:51.581914    3716 out.go:177] * [offline-docker-790000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:50:51.587884    3716 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:50:51.587894    3716 notify.go:220] Checking for updates...
	I0815 16:50:51.593794    3716 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:50:51.596870    3716 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:50:51.599873    3716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:50:51.602764    3716 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:50:51.605758    3716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:50:51.609136    3716 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:50:51.609190    3716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:50:51.611702    3716 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 16:50:51.618787    3716 start.go:297] selected driver: qemu2
	I0815 16:50:51.618796    3716 start.go:901] validating driver "qemu2" against <nil>
	I0815 16:50:51.618802    3716 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:50:51.620810    3716 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 16:50:51.622006    3716 out.go:177] * Automatically selected the socket_vmnet network
	I0815 16:50:51.624938    3716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:50:51.624972    3716 cni.go:84] Creating CNI manager for ""
	I0815 16:50:51.624998    3716 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:50:51.625005    3716 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 16:50:51.625044    3716 start.go:340] cluster config:
	{Name:offline-docker-790000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:50:51.628844    3716 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:50:51.635775    3716 out.go:177] * Starting "offline-docker-790000" primary control-plane node in "offline-docker-790000" cluster
	I0815 16:50:51.640838    3716 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:50:51.640867    3716 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 16:50:51.640875    3716 cache.go:56] Caching tarball of preloaded images
	I0815 16:50:51.640940    3716 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:50:51.640945    3716 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:50:51.641006    3716 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/offline-docker-790000/config.json ...
	I0815 16:50:51.641016    3716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/offline-docker-790000/config.json: {Name:mk662843046d0e2af7fc482c213279d5198f0242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:50:51.641297    3716 start.go:360] acquireMachinesLock for offline-docker-790000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:50:51.641329    3716 start.go:364] duration metric: took 25.292µs to acquireMachinesLock for "offline-docker-790000"
	I0815 16:50:51.641340    3716 start.go:93] Provisioning new machine with config: &{Name:offline-docker-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:50:51.641374    3716 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 16:50:51.649743    3716 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 16:50:51.665665    3716 start.go:159] libmachine.API.Create for "offline-docker-790000" (driver="qemu2")
	I0815 16:50:51.665702    3716 client.go:168] LocalClient.Create starting
	I0815 16:50:51.665784    3716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 16:50:51.665817    3716 main.go:141] libmachine: Decoding PEM data...
	I0815 16:50:51.665832    3716 main.go:141] libmachine: Parsing certificate...
	I0815 16:50:51.665873    3716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 16:50:51.665896    3716 main.go:141] libmachine: Decoding PEM data...
	I0815 16:50:51.665904    3716 main.go:141] libmachine: Parsing certificate...
	I0815 16:50:51.666313    3716 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 16:50:51.816693    3716 main.go:141] libmachine: Creating SSH key...
	I0815 16:50:52.052214    3716 main.go:141] libmachine: Creating Disk image...
	I0815 16:50:52.052226    3716 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 16:50:52.052429    3716 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/disk.qcow2
	I0815 16:50:52.062503    3716 main.go:141] libmachine: STDOUT: 
	I0815 16:50:52.062525    3716 main.go:141] libmachine: STDERR: 
	I0815 16:50:52.062574    3716 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/disk.qcow2 +20000M
	I0815 16:50:52.074142    3716 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 16:50:52.074168    3716 main.go:141] libmachine: STDERR: 
	I0815 16:50:52.074185    3716 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/disk.qcow2
	I0815 16:50:52.074191    3716 main.go:141] libmachine: Starting QEMU VM...
	I0815 16:50:52.074205    3716 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:50:52.074233    3716 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:31:92:4e:14:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/disk.qcow2
	I0815 16:50:52.076174    3716 main.go:141] libmachine: STDOUT: 
	I0815 16:50:52.076190    3716 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:50:52.076207    3716 client.go:171] duration metric: took 410.496708ms to LocalClient.Create
	I0815 16:50:54.078374    3716 start.go:128] duration metric: took 2.436958083s to createHost
	I0815 16:50:54.078402    3716 start.go:83] releasing machines lock for "offline-docker-790000", held for 2.437041917s
	W0815 16:50:54.078417    3716 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:50:54.088247    3716 out.go:177] * Deleting "offline-docker-790000" in qemu2 ...
	W0815 16:50:54.099458    3716 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:50:54.099473    3716 start.go:729] Will try again in 5 seconds ...
	I0815 16:50:59.101770    3716 start.go:360] acquireMachinesLock for offline-docker-790000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:50:59.102220    3716 start.go:364] duration metric: took 352.958µs to acquireMachinesLock for "offline-docker-790000"
	I0815 16:50:59.102365    3716 start.go:93] Provisioning new machine with config: &{Name:offline-docker-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:50:59.102647    3716 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 16:50:59.111238    3716 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 16:50:59.160905    3716 start.go:159] libmachine.API.Create for "offline-docker-790000" (driver="qemu2")
	I0815 16:50:59.160956    3716 client.go:168] LocalClient.Create starting
	I0815 16:50:59.161094    3716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 16:50:59.161159    3716 main.go:141] libmachine: Decoding PEM data...
	I0815 16:50:59.161178    3716 main.go:141] libmachine: Parsing certificate...
	I0815 16:50:59.161266    3716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 16:50:59.161312    3716 main.go:141] libmachine: Decoding PEM data...
	I0815 16:50:59.161337    3716 main.go:141] libmachine: Parsing certificate...
	I0815 16:50:59.161826    3716 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 16:50:59.324455    3716 main.go:141] libmachine: Creating SSH key...
	I0815 16:50:59.516269    3716 main.go:141] libmachine: Creating Disk image...
	I0815 16:50:59.516285    3716 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 16:50:59.516498    3716 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/disk.qcow2
	I0815 16:50:59.526029    3716 main.go:141] libmachine: STDOUT: 
	I0815 16:50:59.526049    3716 main.go:141] libmachine: STDERR: 
	I0815 16:50:59.526090    3716 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/disk.qcow2 +20000M
	I0815 16:50:59.533987    3716 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 16:50:59.534013    3716 main.go:141] libmachine: STDERR: 
	I0815 16:50:59.534030    3716 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/disk.qcow2
	I0815 16:50:59.534036    3716 main.go:141] libmachine: Starting QEMU VM...
	I0815 16:50:59.534043    3716 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:50:59.534077    3716 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:7f:ac:f3:2e:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/offline-docker-790000/disk.qcow2
	I0815 16:50:59.535741    3716 main.go:141] libmachine: STDOUT: 
	I0815 16:50:59.535758    3716 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:50:59.535772    3716 client.go:171] duration metric: took 374.806334ms to LocalClient.Create
	I0815 16:51:01.537998    3716 start.go:128] duration metric: took 2.435289083s to createHost
	I0815 16:51:01.538093    3716 start.go:83] releasing machines lock for "offline-docker-790000", held for 2.435821209s
	W0815 16:51:01.538554    3716 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:51:01.550200    3716 out.go:201] 
	W0815 16:51:01.554189    3716 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:51:01.554214    3716 out.go:270] * 
	* 
	W0815 16:51:01.557046    3716 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:51:01.566193    3716 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-790000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-15 16:51:01.581089 -0700 PDT m=+2745.511959959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-790000 -n offline-docker-790000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-790000 -n offline-docker-790000: exit status 7 (72.811333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-790000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-790000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-790000
--- FAIL: TestOffline (10.21s)
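
Note: nearly every remaining failure in this run shares the root cause visible above. libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet; "Connection refused" means nothing is listening on that socket, i.e. the daemon is not running on the agent. A minimal sketch of the same connectivity check, independent of minikube (the socket path is the one minikube records as SocketVMnetPath):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same path minikube records as SocketVMnetPath in the cluster config.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On this agent the dial fails with "connection refused", matching
		// the StartHost errors repeated throughout this report.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet daemon is listening")
}

If the dial fails, restarting the socket_vmnet service on the host is the likely fix; this points at agent setup rather than a minikube regression.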

TestCertOptions (10.24s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-617000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-617000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.983987666s)

-- stdout --
	* [cert-options-617000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-617000" primary control-plane node in "cert-options-617000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-617000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-617000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-617000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-617000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-617000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.657208ms)

-- stdout --
	* The control-plane node cert-options-617000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-617000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-617000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-617000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-617000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-617000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (39.780541ms)

-- stdout --
	* The control-plane node cert-options-617000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-617000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-617000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port.
-- stdout --
	* The control-plane node cert-options-617000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-617000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-15 16:51:33.218732 -0700 PDT m=+2777.149254126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-617000 -n cert-options-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-617000 -n cert-options-617000: exit status 7 (30.253667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-617000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-617000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-617000
--- FAIL: TestCertOptions (10.24s)
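
Every assertion in this test (and in the related tests below) fails for the same root cause, visible in the stderr above: no socket_vmnet daemon was listening on /var/run/socket_vmnet, so no qemu2 VM could be created. A pre-flight check along these lines would catch it before the suite runs (a sketch; the daemon path matches the logs, while the gateway address is an assumption, not taken from this run):
	# is anything serving the socket?
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet
	# if not, start the daemon first
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &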

TestCertExpiration (195.32s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-703000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-703000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.965319791s)

-- stdout --
	* [cert-expiration-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-703000" primary control-plane node in "cert-expiration-703000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-703000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-703000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-703000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-703000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.202354208s)

-- stdout --
	* [cert-expiration-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-703000" primary control-plane node in "cert-expiration-703000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-703000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-703000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-703000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-703000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-703000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-703000" primary control-plane node in "cert-expiration-703000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-703000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-703000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-703000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-15 16:54:33.152078 -0700 PDT m=+2957.080620084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-703000 -n cert-expiration-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-703000 -n cert-expiration-703000: exit status 7 (65.328417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-703000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-703000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-703000
--- FAIL: TestCertExpiration (195.32s)
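
Had the VM come up, the expiry window this test provokes could be confirmed directly from the certificate dates (a sketch, assuming the standard minikube cert path inside the guest):
	# after the first start with --cert-expiration=3m, notBefore/notAfter should span only 3 minutes
	out/minikube-darwin-arm64 -p cert-expiration-703000 ssh -- "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"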

TestDockerFlags (10.09s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-672000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-672000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.860871042s)

-- stdout --
	* [docker-flags-672000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-672000" primary control-plane node in "docker-flags-672000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-672000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 16:51:13.015849    3909 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:51:13.015980    3909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:51:13.015984    3909 out.go:358] Setting ErrFile to fd 2...
	I0815 16:51:13.015986    3909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:51:13.016106    3909 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:51:13.017244    3909 out.go:352] Setting JSON to false
	I0815 16:51:13.033505    3909 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3041,"bootTime":1723762832,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:51:13.033580    3909 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:51:13.039023    3909 out.go:177] * [docker-flags-672000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:51:13.046728    3909 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:51:13.046783    3909 notify.go:220] Checking for updates...
	I0815 16:51:13.054893    3909 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:51:13.056336    3909 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:51:13.059892    3909 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:51:13.062841    3909 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:51:13.065910    3909 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:51:13.069129    3909 config.go:182] Loaded profile config "force-systemd-flag-246000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:51:13.069195    3909 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:51:13.069246    3909 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:51:13.073817    3909 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 16:51:13.080861    3909 start.go:297] selected driver: qemu2
	I0815 16:51:13.080867    3909 start.go:901] validating driver "qemu2" against <nil>
	I0815 16:51:13.080875    3909 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:51:13.083132    3909 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 16:51:13.085859    3909 out.go:177] * Automatically selected the socket_vmnet network
	I0815 16:51:13.093970    3909 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0815 16:51:13.094002    3909 cni.go:84] Creating CNI manager for ""
	I0815 16:51:13.094016    3909 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:51:13.094021    3909 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 16:51:13.094064    3909 start.go:340] cluster config:
	{Name:docker-flags-672000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-672000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:51:13.097795    3909 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:51:13.100861    3909 out.go:177] * Starting "docker-flags-672000" primary control-plane node in "docker-flags-672000" cluster
	I0815 16:51:13.104850    3909 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:51:13.104864    3909 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 16:51:13.104874    3909 cache.go:56] Caching tarball of preloaded images
	I0815 16:51:13.104939    3909 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:51:13.104945    3909 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:51:13.105009    3909 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/docker-flags-672000/config.json ...
	I0815 16:51:13.105020    3909 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/docker-flags-672000/config.json: {Name:mk9312fef0d8f47ad9414f184be219748df14962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:51:13.105318    3909 start.go:360] acquireMachinesLock for docker-flags-672000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:51:13.105354    3909 start.go:364] duration metric: took 30.166µs to acquireMachinesLock for "docker-flags-672000"
	I0815 16:51:13.105368    3909 start.go:93] Provisioning new machine with config: &{Name:docker-flags-672000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-672000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:51:13.105396    3909 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 16:51:13.113895    3909 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 16:51:13.132256    3909 start.go:159] libmachine.API.Create for "docker-flags-672000" (driver="qemu2")
	I0815 16:51:13.132288    3909 client.go:168] LocalClient.Create starting
	I0815 16:51:13.132351    3909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 16:51:13.132380    3909 main.go:141] libmachine: Decoding PEM data...
	I0815 16:51:13.132390    3909 main.go:141] libmachine: Parsing certificate...
	I0815 16:51:13.132430    3909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 16:51:13.132454    3909 main.go:141] libmachine: Decoding PEM data...
	I0815 16:51:13.132460    3909 main.go:141] libmachine: Parsing certificate...
	I0815 16:51:13.132857    3909 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 16:51:13.284917    3909 main.go:141] libmachine: Creating SSH key...
	I0815 16:51:13.325360    3909 main.go:141] libmachine: Creating Disk image...
	I0815 16:51:13.325370    3909 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 16:51:13.325584    3909 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/disk.qcow2
	I0815 16:51:13.334760    3909 main.go:141] libmachine: STDOUT: 
	I0815 16:51:13.334777    3909 main.go:141] libmachine: STDERR: 
	I0815 16:51:13.334824    3909 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/disk.qcow2 +20000M
	I0815 16:51:13.342735    3909 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 16:51:13.342751    3909 main.go:141] libmachine: STDERR: 
	I0815 16:51:13.342764    3909 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/disk.qcow2
	I0815 16:51:13.342769    3909 main.go:141] libmachine: Starting QEMU VM...
	I0815 16:51:13.342780    3909 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:51:13.342813    3909 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:ba:6c:35:14:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/disk.qcow2
	I0815 16:51:13.344477    3909 main.go:141] libmachine: STDOUT: 
	I0815 16:51:13.344492    3909 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:51:13.344509    3909 client.go:171] duration metric: took 212.214167ms to LocalClient.Create
	I0815 16:51:15.346708    3909 start.go:128] duration metric: took 2.241266334s to createHost
	I0815 16:51:15.346765    3909 start.go:83] releasing machines lock for "docker-flags-672000", held for 2.241376667s
	W0815 16:51:15.346824    3909 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:51:15.358119    3909 out.go:177] * Deleting "docker-flags-672000" in qemu2 ...
	W0815 16:51:15.386067    3909 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:51:15.386100    3909 start.go:729] Will try again in 5 seconds ...
	I0815 16:51:20.388322    3909 start.go:360] acquireMachinesLock for docker-flags-672000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:51:20.427389    3909 start.go:364] duration metric: took 38.973416ms to acquireMachinesLock for "docker-flags-672000"
	I0815 16:51:20.427473    3909 start.go:93] Provisioning new machine with config: &{Name:docker-flags-672000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-672000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:51:20.427782    3909 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 16:51:20.442348    3909 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 16:51:20.493033    3909 start.go:159] libmachine.API.Create for "docker-flags-672000" (driver="qemu2")
	I0815 16:51:20.493084    3909 client.go:168] LocalClient.Create starting
	I0815 16:51:20.493214    3909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 16:51:20.493272    3909 main.go:141] libmachine: Decoding PEM data...
	I0815 16:51:20.493290    3909 main.go:141] libmachine: Parsing certificate...
	I0815 16:51:20.493346    3909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 16:51:20.493392    3909 main.go:141] libmachine: Decoding PEM data...
	I0815 16:51:20.493406    3909 main.go:141] libmachine: Parsing certificate...
	I0815 16:51:20.493992    3909 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 16:51:20.656408    3909 main.go:141] libmachine: Creating SSH key...
	I0815 16:51:20.775773    3909 main.go:141] libmachine: Creating Disk image...
	I0815 16:51:20.775779    3909 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 16:51:20.776178    3909 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/disk.qcow2
	I0815 16:51:20.785555    3909 main.go:141] libmachine: STDOUT: 
	I0815 16:51:20.785574    3909 main.go:141] libmachine: STDERR: 
	I0815 16:51:20.785627    3909 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/disk.qcow2 +20000M
	I0815 16:51:20.793483    3909 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 16:51:20.793497    3909 main.go:141] libmachine: STDERR: 
	I0815 16:51:20.793515    3909 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/disk.qcow2
	I0815 16:51:20.793520    3909 main.go:141] libmachine: Starting QEMU VM...
	I0815 16:51:20.793531    3909 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:51:20.793573    3909 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:03:0c:40:da:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/docker-flags-672000/disk.qcow2
	I0815 16:51:20.795233    3909 main.go:141] libmachine: STDOUT: 
	I0815 16:51:20.795254    3909 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:51:20.795267    3909 client.go:171] duration metric: took 302.17325ms to LocalClient.Create
	I0815 16:51:22.797473    3909 start.go:128] duration metric: took 2.36963425s to createHost
	I0815 16:51:22.797533    3909 start.go:83] releasing machines lock for "docker-flags-672000", held for 2.370086291s
	W0815 16:51:22.797985    3909 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-672000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-672000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:51:22.814809    3909 out.go:201] 
	W0815 16:51:22.821827    3909 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:51:22.821861    3909 out.go:270] * 
	* 
	W0815 16:51:22.824104    3909 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:51:22.834462    3909 out.go:201] 

** /stderr **
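
Note the invocation pattern in the stderr above: libmachine does not launch qemu-system-aarch64 directly but wraps it in socket_vmnet_client, which dials /var/run/socket_vmnet and hands the connected socket to QEMU as fd 3 (-netdev socket,id=net0,fd=3). When the dial fails, QEMU never starts. The failing step can be reproduced in isolation with any child command (a sketch):
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# with the daemon down this prints: Failed to connect to "/var/run/socket_vmnet": Connection refused
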
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-672000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-672000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-672000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (75.568458ms)

-- stdout --
	* The control-plane node docker-flags-672000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-672000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-672000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-672000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-672000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-672000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-672000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-672000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-672000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.653417ms)

-- stdout --
	* The control-plane node docker-flags-672000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-672000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-672000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-672000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-672000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-672000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-15 16:51:22.975544 -0700 PDT m=+2766.906178876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-672000 -n docker-flags-672000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-672000 -n docker-flags-672000: exit status 7 (28.9995ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-672000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-672000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-672000
--- FAIL: TestDockerFlags (10.09s)
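
For contrast, on a cluster that actually starts, the two systemctl probes above would surface the configured flags roughly as follows (illustrative output, not captured from this run; the dockerd path is an assumption):
	$ sudo systemctl show docker --property=Environment --no-pager
	Environment=FOO=BAR BAZ=BAT
	$ sudo systemctl show docker --property=ExecStart --no-pager
	ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }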

TestForceSystemdFlag (10.1s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-246000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-246000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.911363833s)

-- stdout --
	* [force-systemd-flag-246000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-246000" primary control-plane node in "force-systemd-flag-246000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-246000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 16:51:07.902266    3885 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:51:07.902396    3885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:51:07.902400    3885 out.go:358] Setting ErrFile to fd 2...
	I0815 16:51:07.902402    3885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:51:07.902527    3885 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:51:07.903662    3885 out.go:352] Setting JSON to false
	I0815 16:51:07.919730    3885 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3035,"bootTime":1723762832,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:51:07.919810    3885 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:51:07.925634    3885 out.go:177] * [force-systemd-flag-246000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:51:07.932643    3885 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:51:07.932687    3885 notify.go:220] Checking for updates...
	I0815 16:51:07.939570    3885 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:51:07.943585    3885 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:51:07.946639    3885 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:51:07.949562    3885 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:51:07.952561    3885 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:51:07.955912    3885 config.go:182] Loaded profile config "force-systemd-env-777000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:51:07.955986    3885 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:51:07.956034    3885 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:51:07.960485    3885 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 16:51:07.967559    3885 start.go:297] selected driver: qemu2
	I0815 16:51:07.967566    3885 start.go:901] validating driver "qemu2" against <nil>
	I0815 16:51:07.967572    3885 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:51:07.969818    3885 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 16:51:07.973596    3885 out.go:177] * Automatically selected the socket_vmnet network
	I0815 16:51:07.976738    3885 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 16:51:07.976771    3885 cni.go:84] Creating CNI manager for ""
	I0815 16:51:07.976779    3885 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:51:07.976783    3885 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 16:51:07.976815    3885 start.go:340] cluster config:
	{Name:force-systemd-flag-246000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:51:07.980475    3885 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:51:07.988594    3885 out.go:177] * Starting "force-systemd-flag-246000" primary control-plane node in "force-systemd-flag-246000" cluster
	I0815 16:51:07.992585    3885 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:51:07.992601    3885 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 16:51:07.992612    3885 cache.go:56] Caching tarball of preloaded images
	I0815 16:51:07.992673    3885 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:51:07.992678    3885 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:51:07.992763    3885 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/force-systemd-flag-246000/config.json ...
	I0815 16:51:07.992774    3885 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/force-systemd-flag-246000/config.json: {Name:mk52c362d71f84c075ca85b75b80826f2b368c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:51:07.993079    3885 start.go:360] acquireMachinesLock for force-systemd-flag-246000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:51:07.993111    3885 start.go:364] duration metric: took 25.875µs to acquireMachinesLock for "force-systemd-flag-246000"
	I0815 16:51:07.993124    3885 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:51:07.993158    3885 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 16:51:07.997538    3885 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 16:51:08.013888    3885 start.go:159] libmachine.API.Create for "force-systemd-flag-246000" (driver="qemu2")
	I0815 16:51:08.013922    3885 client.go:168] LocalClient.Create starting
	I0815 16:51:08.013981    3885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 16:51:08.014010    3885 main.go:141] libmachine: Decoding PEM data...
	I0815 16:51:08.014019    3885 main.go:141] libmachine: Parsing certificate...
	I0815 16:51:08.014056    3885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 16:51:08.014077    3885 main.go:141] libmachine: Decoding PEM data...
	I0815 16:51:08.014086    3885 main.go:141] libmachine: Parsing certificate...
	I0815 16:51:08.014518    3885 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 16:51:08.167681    3885 main.go:141] libmachine: Creating SSH key...
	I0815 16:51:08.338066    3885 main.go:141] libmachine: Creating Disk image...
	I0815 16:51:08.338072    3885 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 16:51:08.338302    3885 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/disk.qcow2
	I0815 16:51:08.347820    3885 main.go:141] libmachine: STDOUT: 
	I0815 16:51:08.347845    3885 main.go:141] libmachine: STDERR: 
	I0815 16:51:08.347909    3885 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/disk.qcow2 +20000M
	I0815 16:51:08.355897    3885 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 16:51:08.355917    3885 main.go:141] libmachine: STDERR: 
	I0815 16:51:08.355940    3885 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/disk.qcow2
	I0815 16:51:08.355944    3885 main.go:141] libmachine: Starting QEMU VM...
	I0815 16:51:08.355962    3885 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:51:08.355986    3885 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:d8:ac:fc:08:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/disk.qcow2
	I0815 16:51:08.357641    3885 main.go:141] libmachine: STDOUT: 
	I0815 16:51:08.357660    3885 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:51:08.357687    3885 client.go:171] duration metric: took 343.755417ms to LocalClient.Create
	I0815 16:51:10.359896    3885 start.go:128] duration metric: took 2.366687167s to createHost
	I0815 16:51:10.359991    3885 start.go:83] releasing machines lock for "force-systemd-flag-246000", held for 2.366839458s
	W0815 16:51:10.360122    3885 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:51:10.388312    3885 out.go:177] * Deleting "force-systemd-flag-246000" in qemu2 ...
	W0815 16:51:10.408931    3885 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:51:10.408953    3885 start.go:729] Will try again in 5 seconds ...
	I0815 16:51:15.411227    3885 start.go:360] acquireMachinesLock for force-systemd-flag-246000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:51:15.411612    3885 start.go:364] duration metric: took 293.417µs to acquireMachinesLock for "force-systemd-flag-246000"
	I0815 16:51:15.411721    3885 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:51:15.412013    3885 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 16:51:15.420373    3885 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 16:51:15.467162    3885 start.go:159] libmachine.API.Create for "force-systemd-flag-246000" (driver="qemu2")
	I0815 16:51:15.467224    3885 client.go:168] LocalClient.Create starting
	I0815 16:51:15.467351    3885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 16:51:15.467418    3885 main.go:141] libmachine: Decoding PEM data...
	I0815 16:51:15.467436    3885 main.go:141] libmachine: Parsing certificate...
	I0815 16:51:15.467507    3885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 16:51:15.467552    3885 main.go:141] libmachine: Decoding PEM data...
	I0815 16:51:15.467564    3885 main.go:141] libmachine: Parsing certificate...
	I0815 16:51:15.468760    3885 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 16:51:15.639819    3885 main.go:141] libmachine: Creating SSH key...
	I0815 16:51:15.720302    3885 main.go:141] libmachine: Creating Disk image...
	I0815 16:51:15.720307    3885 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 16:51:15.720825    3885 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/disk.qcow2
	I0815 16:51:15.729997    3885 main.go:141] libmachine: STDOUT: 
	I0815 16:51:15.730015    3885 main.go:141] libmachine: STDERR: 
	I0815 16:51:15.730070    3885 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/disk.qcow2 +20000M
	I0815 16:51:15.737856    3885 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 16:51:15.737871    3885 main.go:141] libmachine: STDERR: 
	I0815 16:51:15.737881    3885 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/disk.qcow2
	I0815 16:51:15.737885    3885 main.go:141] libmachine: Starting QEMU VM...
	I0815 16:51:15.737908    3885 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:51:15.737938    3885 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:f3:bd:0e:79:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-flag-246000/disk.qcow2
	I0815 16:51:15.739479    3885 main.go:141] libmachine: STDOUT: 
	I0815 16:51:15.739495    3885 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:51:15.739506    3885 client.go:171] duration metric: took 272.274709ms to LocalClient.Create
	I0815 16:51:17.741702    3885 start.go:128] duration metric: took 2.329637666s to createHost
	I0815 16:51:17.741749    3885 start.go:83] releasing machines lock for "force-systemd-flag-246000", held for 2.33008775s
	W0815 16:51:17.742084    3885 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-246000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-246000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:51:17.752709    3885 out.go:201] 
	W0815 16:51:17.760797    3885 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:51:17.760837    3885 out.go:270] * 
	* 
	W0815 16:51:17.763594    3885 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:51:17.771750    3885 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-246000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-246000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-246000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.713834ms)

-- stdout --
	* The control-plane node force-systemd-flag-246000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-246000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-246000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-15 16:51:17.865678 -0700 PDT m=+2761.796369584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-246000 -n force-systemd-flag-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-246000 -n force-systemd-flag-246000: exit status 7 (34.192125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-246000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-246000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-246000
--- FAIL: TestForceSystemdFlag (10.10s)
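
Both force-systemd tests in this run reduce to the same environmental problem: nothing is listening on the unix socket /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client cannot hand qemu-system-aarch64 a network file descriptor, and host creation aborts before the VM boots. A minimal standalone probe for that precondition might look like the following Go sketch (a hypothetical helper of ours, not part of minikube; the socket path is copied from the log above):

// probe_socket_vmnet.go - hypothetical probe for the failure mode above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Same socket path that socket_vmnet_client is invoked with in the logs.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Run on the CI host before the suite: a "connection refused" here predicts every qemu2 start in this report failing the same way, and the usual remedy is to (re)start the socket_vmnet daemon on the agent.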

TestForceSystemdEnv (11.25s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-777000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-777000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.065303291s)

-- stdout --
	* [force-systemd-env-777000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-777000" primary control-plane node in "force-systemd-env-777000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-777000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 16:51:01.762862    3851 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:51:01.762982    3851 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:51:01.762985    3851 out.go:358] Setting ErrFile to fd 2...
	I0815 16:51:01.762987    3851 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:51:01.763109    3851 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:51:01.764120    3851 out.go:352] Setting JSON to false
	I0815 16:51:01.780742    3851 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3029,"bootTime":1723762832,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:51:01.780817    3851 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:51:01.786080    3851 out.go:177] * [force-systemd-env-777000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:51:01.794036    3851 notify.go:220] Checking for updates...
	I0815 16:51:01.800026    3851 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:51:01.808993    3851 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:51:01.816972    3851 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:51:01.823937    3851 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:51:01.828009    3851 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:51:01.830997    3851 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0815 16:51:01.834220    3851 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:51:01.834269    3851 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:51:01.838981    3851 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 16:51:01.844946    3851 start.go:297] selected driver: qemu2
	I0815 16:51:01.844951    3851 start.go:901] validating driver "qemu2" against <nil>
	I0815 16:51:01.844965    3851 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:51:01.847330    3851 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 16:51:01.851023    3851 out.go:177] * Automatically selected the socket_vmnet network
	I0815 16:51:01.854100    3851 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 16:51:01.854139    3851 cni.go:84] Creating CNI manager for ""
	I0815 16:51:01.854146    3851 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:51:01.854150    3851 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 16:51:01.854183    3851 start.go:340] cluster config:
	{Name:force-systemd-env-777000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:51:01.857971    3851 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:51:01.864959    3851 out.go:177] * Starting "force-systemd-env-777000" primary control-plane node in "force-systemd-env-777000" cluster
	I0815 16:51:01.869041    3851 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:51:01.869056    3851 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 16:51:01.869065    3851 cache.go:56] Caching tarball of preloaded images
	I0815 16:51:01.869123    3851 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:51:01.869130    3851 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:51:01.869183    3851 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/force-systemd-env-777000/config.json ...
	I0815 16:51:01.869194    3851 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/force-systemd-env-777000/config.json: {Name:mk3326b712c3a14966461b7177378820d4da27d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:51:01.869396    3851 start.go:360] acquireMachinesLock for force-systemd-env-777000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:51:01.869433    3851 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "force-systemd-env-777000"
	I0815 16:51:01.869445    3851 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:51:01.869479    3851 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 16:51:01.878025    3851 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 16:51:01.895514    3851 start.go:159] libmachine.API.Create for "force-systemd-env-777000" (driver="qemu2")
	I0815 16:51:01.895541    3851 client.go:168] LocalClient.Create starting
	I0815 16:51:01.895605    3851 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 16:51:01.895636    3851 main.go:141] libmachine: Decoding PEM data...
	I0815 16:51:01.895644    3851 main.go:141] libmachine: Parsing certificate...
	I0815 16:51:01.895692    3851 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 16:51:01.895714    3851 main.go:141] libmachine: Decoding PEM data...
	I0815 16:51:01.895722    3851 main.go:141] libmachine: Parsing certificate...
	I0815 16:51:01.896058    3851 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 16:51:02.050549    3851 main.go:141] libmachine: Creating SSH key...
	I0815 16:51:02.176044    3851 main.go:141] libmachine: Creating Disk image...
	I0815 16:51:02.176051    3851 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 16:51:02.176277    3851 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/disk.qcow2
	I0815 16:51:02.185969    3851 main.go:141] libmachine: STDOUT: 
	I0815 16:51:02.185989    3851 main.go:141] libmachine: STDERR: 
	I0815 16:51:02.186042    3851 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/disk.qcow2 +20000M
	I0815 16:51:02.194166    3851 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 16:51:02.194180    3851 main.go:141] libmachine: STDERR: 
	I0815 16:51:02.194198    3851 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/disk.qcow2
	I0815 16:51:02.194203    3851 main.go:141] libmachine: Starting QEMU VM...
	I0815 16:51:02.194218    3851 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:51:02.194242    3851 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:9e:31:2d:9e:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/disk.qcow2
	I0815 16:51:02.195873    3851 main.go:141] libmachine: STDOUT: 
	I0815 16:51:02.195890    3851 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:51:02.195909    3851 client.go:171] duration metric: took 300.358959ms to LocalClient.Create
	I0815 16:51:04.198012    3851 start.go:128] duration metric: took 2.328498875s to createHost
	I0815 16:51:04.198037    3851 start.go:83] releasing machines lock for "force-systemd-env-777000", held for 2.328574166s
	W0815 16:51:04.198056    3851 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:51:04.206647    3851 out.go:177] * Deleting "force-systemd-env-777000" in qemu2 ...
	W0815 16:51:04.216225    3851 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:51:04.216236    3851 start.go:729] Will try again in 5 seconds ...
	I0815 16:51:09.218634    3851 start.go:360] acquireMachinesLock for force-systemd-env-777000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:51:10.360221    3851 start.go:364] duration metric: took 1.141461833s to acquireMachinesLock for "force-systemd-env-777000"
	I0815 16:51:10.360331    3851 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:51:10.360543    3851 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 16:51:10.376269    3851 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 16:51:10.429191    3851 start.go:159] libmachine.API.Create for "force-systemd-env-777000" (driver="qemu2")
	I0815 16:51:10.429252    3851 client.go:168] LocalClient.Create starting
	I0815 16:51:10.429380    3851 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 16:51:10.429449    3851 main.go:141] libmachine: Decoding PEM data...
	I0815 16:51:10.429469    3851 main.go:141] libmachine: Parsing certificate...
	I0815 16:51:10.429523    3851 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 16:51:10.429567    3851 main.go:141] libmachine: Decoding PEM data...
	I0815 16:51:10.429581    3851 main.go:141] libmachine: Parsing certificate...
	I0815 16:51:10.430140    3851 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 16:51:10.592682    3851 main.go:141] libmachine: Creating SSH key...
	I0815 16:51:10.733238    3851 main.go:141] libmachine: Creating Disk image...
	I0815 16:51:10.733245    3851 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 16:51:10.734007    3851 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/disk.qcow2
	I0815 16:51:10.743291    3851 main.go:141] libmachine: STDOUT: 
	I0815 16:51:10.743310    3851 main.go:141] libmachine: STDERR: 
	I0815 16:51:10.743378    3851 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/disk.qcow2 +20000M
	I0815 16:51:10.751365    3851 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 16:51:10.751378    3851 main.go:141] libmachine: STDERR: 
	I0815 16:51:10.751388    3851 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/disk.qcow2
	I0815 16:51:10.751392    3851 main.go:141] libmachine: Starting QEMU VM...
	I0815 16:51:10.751416    3851 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:51:10.751437    3851 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:23:49:76:32:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/force-systemd-env-777000/disk.qcow2
	I0815 16:51:10.753050    3851 main.go:141] libmachine: STDOUT: 
	I0815 16:51:10.753063    3851 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:51:10.753074    3851 client.go:171] duration metric: took 323.813459ms to LocalClient.Create
	I0815 16:51:12.755376    3851 start.go:128] duration metric: took 2.394775375s to createHost
	I0815 16:51:12.755420    3851 start.go:83] releasing machines lock for "force-systemd-env-777000", held for 2.395131833s
	W0815 16:51:12.755795    3851 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-777000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-777000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:51:12.767442    3851 out.go:201] 
	W0815 16:51:12.772409    3851 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:51:12.772478    3851 out.go:270] * 
	* 
	W0815 16:51:12.775116    3851 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:51:12.784498    3851 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-777000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-777000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-777000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.391542ms)

-- stdout --
	* The control-plane node force-systemd-env-777000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-777000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-777000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-15 16:51:12.878908 -0700 PDT m=+2756.809654417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-777000 -n force-systemd-env-777000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-777000 -n force-systemd-env-777000: exit status 7 (34.201583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-777000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-777000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-777000
--- FAIL: TestForceSystemdEnv (11.25s)
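
The stderr trace above shows the shape of minikube's recovery path: createHost fails, the half-created profile is deleted, start.go pauses ("Will try again in 5 seconds ..."), and a second createHost attempt runs before the GUEST_PROVISION exit. A condensed, hypothetical sketch of that one-retry flow (ours, not the actual start.go code; the error string is copied from the log):

// retry_sketch.go - hypothetical illustration of the delete-wait-retry path.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the real host creation; here it always fails the
// way every qemu2 create in this report does.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := createHost()
	if err == nil {
		return
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err = createHost(); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}

Because the root cause is on the host side, the retry fails identically, which is why both attempts in each test show the same "Connection refused" roughly five seconds apart.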

TestFunctional/parallel/ServiceCmdConnect (30.74s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-899000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-899000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-4jl4d" [8c2e5f18-a7be-4a96-b5ec-5279201fa301] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-4jl4d" [8c2e5f18-a7be-4a96-b5ec-5279201fa301] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003850459s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31603
functional_test.go:1661: error fetching http://192.168.105.4:31603: Get "http://192.168.105.4:31603": dial tcp 192.168.105.4:31603: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31603: Get "http://192.168.105.4:31603": dial tcp 192.168.105.4:31603: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31603: Get "http://192.168.105.4:31603": dial tcp 192.168.105.4:31603: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31603: Get "http://192.168.105.4:31603": dial tcp 192.168.105.4:31603: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31603: Get "http://192.168.105.4:31603": dial tcp 192.168.105.4:31603: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31603: Get "http://192.168.105.4:31603": dial tcp 192.168.105.4:31603: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31603: Get "http://192.168.105.4:31603": dial tcp 192.168.105.4:31603: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31603: Get "http://192.168.105.4:31603": dial tcp 192.168.105.4:31603: connect: connection refused
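The check at functional_test.go:1661 repeatedly issues GET requests against the NodePort URL and gives up after the connection keeps being refused; the service has no ready endpoints because the pod behind it is crash-looping (see the describe output below). A hedged re-creation of that polling loop (hypothetical; the URL and attempt count are copied from the log, the retry interval is a guess):

// poll_endpoint.go - hypothetical stand-in for the test's connectivity check.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.105.4:31603" // NodePort URL reported by `minikube service --url`
	var lastErr error
	for attempt := 0; attempt < 7; attempt++ { // the log records seven refused GETs
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Println("endpoint reachable")
			return
		}
		lastErr = err
		time.Sleep(3 * time.Second) // interval assumed; the real test uses its own backoff
	}
	fmt.Printf("failed to fetch %s: %v\n", url, lastErr)
}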
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-899000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-4jl4d
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-899000/192.168.105.4
Start Time:       Thu, 15 Aug 2024 16:16:07 -0700
Labels:           app=hello-node-connect
pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
echoserver-arm:
Container ID:   docker://b1e960c03ce23a48b257f1dcb01e0b0cd28e996604237b525eab0d2355a9c7bc
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Thu, 15 Aug 2024 16:16:23 -0700
Finished:     Thu, 15 Aug 2024 16:16:23 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-scglj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-scglj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  29s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-4jl4d to functional-899000
Normal   Pulled     13s (x3 over 29s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    13s (x3 over 29s)  kubelet            Created container echoserver-arm
Normal   Started    13s (x3 over 29s)  kubelet            Started container echoserver-arm
Warning  BackOff    1s (x3 over 27s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-4jl4d_default(8c2e5f18-a7be-4a96-b5ec-5279201fa301)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-899000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
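"exec format error" is the kernel refusing to execute a binary built for a different CPU architecture: despite the -arm suffix, the image's /usr/sbin/nginx is evidently not an arm64 executable, so every container start dies immediately, the pod lands in CrashLoopBackOff, and the service is left with no endpoints. A quick hypothetical check of a binary's target architecture using Go's debug/elf (a helper of ours, not part of the test suite):

// checkarch.go - hypothetical helper to inspect an ELF binary's target machine.
package main

import (
	"debug/elf"
	"fmt"
	"os"
)

func main() {
	f, err := elf.Open(os.Args[1]) // e.g. a copy of the image's /usr/sbin/nginx
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	// EM_AARCH64 can exec on this arm64 node; EM_X86_64 reproduces this failure.
	fmt.Println("target machine:", f.Machine)
}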
functional_test.go:1614: (dbg) Run:  kubectl --context functional-899000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.7.169
IPs:                      10.101.7.169
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31603/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-899000 -n functional-899000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-899000 image ls                                                                                           | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:15 PDT | 15 Aug 24 16:15 PDT |
	| image   | functional-899000 image save --daemon                                                                                | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:15 PDT | 15 Aug 24 16:15 PDT |
	|         | kicbase/echo-server:functional-899000                                                                                |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| ssh     | functional-899000 ssh echo                                                                                           | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:15 PDT | 15 Aug 24 16:15 PDT |
	|         | hello                                                                                                                |                   |         |         |                     |                     |
	| ssh     | functional-899000 ssh cat                                                                                            | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:15 PDT | 15 Aug 24 16:15 PDT |
	|         | /etc/hostname                                                                                                        |                   |         |         |                     |                     |
	| tunnel  | functional-899000 tunnel                                                                                             | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:15 PDT |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| tunnel  | functional-899000 tunnel                                                                                             | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:15 PDT |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| tunnel  | functional-899000 tunnel                                                                                             | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:15 PDT |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| service | functional-899000 service list                                                                                       | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT | 15 Aug 24 16:16 PDT |
	| service | functional-899000 service list                                                                                       | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT | 15 Aug 24 16:16 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-899000 service                                                                                            | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT | 15 Aug 24 16:16 PDT |
	|         | --namespace=default --https                                                                                          |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                     |                   |         |         |                     |                     |
	| service | functional-899000                                                                                                    | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT | 15 Aug 24 16:16 PDT |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-899000 service                                                                                            | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT | 15 Aug 24 16:16 PDT |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| addons  | functional-899000 addons list                                                                                        | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT | 15 Aug 24 16:16 PDT |
	| addons  | functional-899000 addons list                                                                                        | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT | 15 Aug 24 16:16 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-899000 service                                                                                            | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT | 15 Aug 24 16:16 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| mount   | -p functional-899000                                                                                                 | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port172792078/001:/mount-9p       |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-899000 ssh findmnt                                                                                        | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-899000 ssh findmnt                                                                                        | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT | 15 Aug 24 16:16 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-899000 ssh -- ls                                                                                          | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT | 15 Aug 24 16:16 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-899000 ssh cat                                                                                            | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT | 15 Aug 24 16:16 PDT |
	|         | /mount-9p/test-1723763791356602000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-899000 ssh stat                                                                                           | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT | 15 Aug 24 16:16 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-899000 ssh stat                                                                                           | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT | 15 Aug 24 16:16 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-899000 ssh sudo                                                                                           | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT | 15 Aug 24 16:16 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-899000 ssh findmnt                                                                                        | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-899000                                                                                                 | functional-899000 | jenkins | v1.33.1 | 15 Aug 24 16:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1556637293/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 16:15:07
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
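	
	The header format above decodes each log line as severity, date, time, thread id, and source location. A minimal sketch of the decomposition (the grep below is illustrative, not part of the run; minikube.log is a placeholder filename):
	
	# Decoding one line per the stated format:
	#   I0815 16:15:07.031945    1970 out.go:345] Setting OutFile to fd 1 ...
	#   I               -> severity (I/W/E/F)
	#   0815            -> mmdd
	#   16:15:07.031945 -> hh:mm:ss.uuuuuu
	#   1970            -> thread id
	#   out.go:345      -> file:line
	#   text after "]"  -> msg
	grep -E '^[IWEF][0-9]{4} [0-9:.]{15} +[0-9]+ [^ ]+:[0-9]+\]' minikube.log | head
	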
	I0815 16:15:07.031945    1970 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:15:07.032125    1970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:15:07.032128    1970 out.go:358] Setting ErrFile to fd 2...
	I0815 16:15:07.032130    1970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:15:07.032314    1970 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:15:07.033787    1970 out.go:352] Setting JSON to false
	I0815 16:15:07.055634    1970 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":876,"bootTime":1723762831,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:15:07.055705    1970 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:15:07.060813    1970 out.go:177] * [functional-899000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:15:07.069741    1970 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:15:07.069795    1970 notify.go:220] Checking for updates...
	I0815 16:15:07.077695    1970 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:15:07.081638    1970 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:15:07.084738    1970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:15:07.087673    1970 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:15:07.090750    1970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:15:07.093917    1970 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:15:07.093974    1970 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:15:07.097686    1970 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 16:15:07.104630    1970 start.go:297] selected driver: qemu2
	I0815 16:15:07.104634    1970 start.go:901] validating driver "qemu2" against &{Name:functional-899000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-899000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:15:07.104689    1970 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:15:07.107322    1970 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:15:07.107367    1970 cni.go:84] Creating CNI manager for ""
	I0815 16:15:07.107379    1970 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:15:07.107427    1970 start.go:340] cluster config:
	{Name:functional-899000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-899000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:15:07.111227    1970 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:15:07.118511    1970 out.go:177] * Starting "functional-899000" primary control-plane node in "functional-899000" cluster
	I0815 16:15:07.122626    1970 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:15:07.122637    1970 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 16:15:07.122643    1970 cache.go:56] Caching tarball of preloaded images
	I0815 16:15:07.122693    1970 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:15:07.122696    1970 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:15:07.122741    1970 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/config.json ...
	I0815 16:15:07.123181    1970 start.go:360] acquireMachinesLock for functional-899000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:15:07.123217    1970 start.go:364] duration metric: took 32.416µs to acquireMachinesLock for "functional-899000"
	I0815 16:15:07.123225    1970 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:15:07.123229    1970 fix.go:54] fixHost starting: 
	I0815 16:15:07.123787    1970 fix.go:112] recreateIfNeeded on functional-899000: state=Running err=<nil>
	W0815 16:15:07.123793    1970 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:15:07.128557    1970 out.go:177] * Updating the running qemu2 "functional-899000" VM ...
	I0815 16:15:07.136698    1970 machine.go:93] provisionDockerMachine start ...
	I0815 16:15:07.136725    1970 main.go:141] libmachine: Using SSH client type: native
	I0815 16:15:07.136842    1970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1031085a0] 0x10310ae00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0815 16:15:07.136845    1970 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:15:07.193185    1970 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-899000
	
	I0815 16:15:07.193194    1970 buildroot.go:166] provisioning hostname "functional-899000"
	I0815 16:15:07.193226    1970 main.go:141] libmachine: Using SSH client type: native
	I0815 16:15:07.193323    1970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1031085a0] 0x10310ae00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0815 16:15:07.193326    1970 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-899000 && echo "functional-899000" | sudo tee /etc/hostname
	I0815 16:15:07.251283    1970 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-899000
	
	I0815 16:15:07.251323    1970 main.go:141] libmachine: Using SSH client type: native
	I0815 16:15:07.251431    1970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1031085a0] 0x10310ae00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0815 16:15:07.251437    1970 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-899000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-899000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-899000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:15:07.305605    1970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
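	
	The hostname script above is idempotent: it rewrites the 127.0.1.1 entry only when no /etc/hosts line already ends in the hostname. A quick in-guest check might look like this (illustrative only, not executed by the test):
	
	hostname                         # expected: functional-899000
	grep -n '127.0.1.1' /etc/hosts   # expected: 127.0.1.1 functional-899000
	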
	I0815 16:15:07.305615    1970 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-964/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-964/.minikube}
	I0815 16:15:07.305621    1970 buildroot.go:174] setting up certificates
	I0815 16:15:07.305627    1970 provision.go:84] configureAuth start
	I0815 16:15:07.305631    1970 provision.go:143] copyHostCerts
	I0815 16:15:07.305710    1970 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-964/.minikube/ca.pem, removing ...
	I0815 16:15:07.305714    1970 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-964/.minikube/ca.pem
	I0815 16:15:07.306058    1970 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-964/.minikube/ca.pem (1082 bytes)
	I0815 16:15:07.306236    1970 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-964/.minikube/cert.pem, removing ...
	I0815 16:15:07.306238    1970 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-964/.minikube/cert.pem
	I0815 16:15:07.306296    1970 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-964/.minikube/cert.pem (1123 bytes)
	I0815 16:15:07.306407    1970 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-964/.minikube/key.pem, removing ...
	I0815 16:15:07.306409    1970 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-964/.minikube/key.pem
	I0815 16:15:07.306457    1970 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-964/.minikube/key.pem (1679 bytes)
	I0815 16:15:07.306544    1970 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-964/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca-key.pem org=jenkins.functional-899000 san=[127.0.0.1 192.168.105.4 functional-899000 localhost minikube]
	I0815 16:15:07.470160    1970 provision.go:177] copyRemoteCerts
	I0815 16:15:07.470202    1970 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:15:07.470210    1970 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/functional-899000/id_rsa Username:docker}
	I0815 16:15:07.499990    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:15:07.507919    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 16:15:07.516452    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 16:15:07.524467    1970 provision.go:87] duration metric: took 218.843ms to configureAuth
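	
	configureAuth regenerates the host-side CA, client, and server certificates, and copyRemoteCerts above pushes them into /etc/docker on the VM. Confirming the chain by hand, using the remote paths from this log, could look like the following sketch (illustrative only):
	
	openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
	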
	I0815 16:15:07.524472    1970 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:15:07.524571    1970 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:15:07.524606    1970 main.go:141] libmachine: Using SSH client type: native
	I0815 16:15:07.524684    1970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1031085a0] 0x10310ae00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0815 16:15:07.524687    1970 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:15:07.582444    1970 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:15:07.582449    1970 buildroot.go:70] root file system type: tmpfs
	I0815 16:15:07.582512    1970 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:15:07.582580    1970 main.go:141] libmachine: Using SSH client type: native
	I0815 16:15:07.582687    1970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1031085a0] 0x10310ae00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0815 16:15:07.582718    1970 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:15:07.641443    1970 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:15:07.641493    1970 main.go:141] libmachine: Using SSH client type: native
	I0815 16:15:07.641620    1970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1031085a0] 0x10310ae00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0815 16:15:07.641626    1970 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:15:07.704795    1970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
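	
	The diff-or-swap command above replaces the unit and restarts docker only when the newly rendered file differs from the installed one. The same idempotent pattern in general form (a sketch; write_unit_if_changed is a hypothetical helper, not minikube code):
	
	# New unit content is supplied on stdin.
	write_unit_if_changed() {
	  local unit=$1                                   # e.g. /lib/systemd/system/docker.service
	  sudo tee "$unit.new" >/dev/null
	  if sudo diff -u "$unit" "$unit.new"; then
	    sudo rm -f "$unit.new"                        # unchanged: leave the live unit alone
	  else
	    sudo mv "$unit.new" "$unit"
	    sudo systemctl daemon-reload && sudo systemctl restart "$(basename "$unit")"
	  fi
	}
	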
	I0815 16:15:07.704805    1970 machine.go:96] duration metric: took 568.12025ms to provisionDockerMachine
	I0815 16:15:07.704810    1970 start.go:293] postStartSetup for "functional-899000" (driver="qemu2")
	I0815 16:15:07.704815    1970 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:15:07.704879    1970 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:15:07.704886    1970 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/functional-899000/id_rsa Username:docker}
	I0815 16:15:07.734597    1970 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:15:07.736136    1970 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:15:07.736140    1970 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-964/.minikube/addons for local assets ...
	I0815 16:15:07.736228    1970 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-964/.minikube/files for local assets ...
	I0815 16:15:07.736340    1970 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem -> 14462.pem in /etc/ssl/certs
	I0815 16:15:07.736447    1970 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/test/nested/copy/1446/hosts -> hosts in /etc/test/nested/copy/1446
	I0815 16:15:07.736481    1970 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1446
	I0815 16:15:07.739637    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem --> /etc/ssl/certs/14462.pem (1708 bytes)
	I0815 16:15:07.747872    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/test/nested/copy/1446/hosts --> /etc/test/nested/copy/1446/hosts (40 bytes)
	I0815 16:15:07.756309    1970 start.go:296] duration metric: took 51.496833ms for postStartSetup
	I0815 16:15:07.756320    1970 fix.go:56] duration metric: took 633.111958ms for fixHost
	I0815 16:15:07.756351    1970 main.go:141] libmachine: Using SSH client type: native
	I0815 16:15:07.756447    1970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1031085a0] 0x10310ae00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0815 16:15:07.756450    1970 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:15:07.813008    1970 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723763708.004404130
	
	I0815 16:15:07.813013    1970 fix.go:216] guest clock: 1723763708.004404130
	I0815 16:15:07.813017    1970 fix.go:229] Guest: 2024-08-15 16:15:08.00440413 -0700 PDT Remote: 2024-08-15 16:15:07.756321 -0700 PDT m=+0.761390709 (delta=248.08313ms)
	I0815 16:15:07.813026    1970 fix.go:200] guest clock delta is within tolerance: 248.08313ms
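	
	The clock fix compares the guest's `date +%s.%N` against the host and only resyncs when the delta exceeds a tolerance; here the 248ms delta was within it. A rough host-side skew check using the SSH key path from this log (illustrative; whole-second resolution only):
	
	guest=$(ssh -i /Users/jenkins/minikube-integration/19452-964/.minikube/machines/functional-899000/id_rsa \
	    docker@192.168.105.4 date +%s)
	host=$(date +%s)
	echo "skew: $((guest - host))s"   # the run above measured ~0.248s
	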
	I0815 16:15:07.813028    1970 start.go:83] releasing machines lock for "functional-899000", held for 689.82925ms
	I0815 16:15:07.813319    1970 ssh_runner.go:195] Run: cat /version.json
	I0815 16:15:07.813321    1970 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:15:07.813325    1970 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/functional-899000/id_rsa Username:docker}
	I0815 16:15:07.813335    1970 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/functional-899000/id_rsa Username:docker}
	I0815 16:15:07.881143    1970 ssh_runner.go:195] Run: systemctl --version
	I0815 16:15:07.883220    1970 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 16:15:07.885171    1970 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:15:07.885193    1970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:15:07.888651    1970 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 16:15:07.888658    1970 start.go:495] detecting cgroup driver to use...
	I0815 16:15:07.888715    1970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:15:07.895075    1970 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:15:07.899105    1970 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:15:07.903078    1970 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:15:07.903106    1970 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:15:07.907103    1970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:15:07.910925    1970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:15:07.915030    1970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:15:07.919048    1970 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:15:07.922869    1970 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:15:07.926805    1970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:15:07.938344    1970 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:15:07.942636    1970 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:15:07.945990    1970 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:15:07.949714    1970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:15:08.044657    1970 ssh_runner.go:195] Run: sudo systemctl restart containerd
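	
	The sed edits above drive containerd to the cgroupfs driver (SystemdCgroup = false) and the runc.v2 runtime before this restart. Confirming the result in-guest could look like (illustrative):
	
	grep -n 'SystemdCgroup' /etc/containerd/config.toml        # expected: SystemdCgroup = false
	grep -n 'io.containerd.runc.v2' /etc/containerd/config.toml
	sudo systemctl is-active containerd
	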
	I0815 16:15:08.052090    1970 start.go:495] detecting cgroup driver to use...
	I0815 16:15:08.052136    1970 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:15:08.058602    1970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:15:08.064063    1970 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:15:08.071152    1970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:15:08.076685    1970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:15:08.082073    1970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:15:08.088585    1970 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:15:08.090084    1970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:15:08.093984    1970 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:15:08.099746    1970 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:15:08.190581    1970 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:15:08.297167    1970 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:15:08.297229    1970 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:15:08.304596    1970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:15:08.393427    1970 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:15:20.717436    1970 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.32436775s)
	I0815 16:15:20.717506    1970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:15:20.723461    1970 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:15:20.731131    1970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:15:20.736830    1970 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:15:20.824385    1970 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:15:20.911212    1970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:15:21.003576    1970 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:15:21.010251    1970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:15:21.015902    1970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:15:21.104473    1970 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:15:21.133246    1970 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:15:21.133326    1970 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:15:21.136500    1970 start.go:563] Will wait 60s for crictl version
	I0815 16:15:21.136539    1970 ssh_runner.go:195] Run: which crictl
	I0815 16:15:21.138009    1970 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:15:21.149467    1970 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:15:21.149546    1970 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:15:21.156839    1970 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:15:21.172152    1970 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:15:21.172284    1970 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0815 16:15:21.178093    1970 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0815 16:15:21.182119    1970 kubeadm.go:883] updating cluster {Name:functional-899000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-899000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 16:15:21.182193    1970 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:15:21.182250    1970 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:15:21.188090    1970 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-899000
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0815 16:15:21.188095    1970 docker.go:615] Images already preloaded, skipping extraction
	I0815 16:15:21.188141    1970 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:15:21.193581    1970 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-899000
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0815 16:15:21.193587    1970 cache_images.go:84] Images are preloaded, skipping loading
	I0815 16:15:21.193593    1970 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.0 docker true true} ...
	I0815 16:15:21.193658    1970 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-899000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:functional-899000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
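	
	The kubelet unit rendered above (together with the 10-kubeadm.conf drop-in written a few steps later) pins the kubeconfig path, node IP, and hostname override. Inspecting the merged unit in-guest could look like (illustrative):
	
	systemctl cat kubelet   # prints /lib/systemd/system/kubelet.service plus
	                        # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	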
	I0815 16:15:21.193711    1970 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0815 16:15:21.209694    1970 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0815 16:15:21.209740    1970 cni.go:84] Creating CNI manager for ""
	I0815 16:15:21.209747    1970 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:15:21.209753    1970 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 16:15:21.209765    1970 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-899000 NodeName:functional-899000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 16:15:21.209820    1970 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-899000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 16:15:21.209873    1970 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:15:21.213628    1970 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:15:21.213654    1970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 16:15:21.217170    1970 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0815 16:15:21.223137    1970 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:15:21.228859    1970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
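	
	Before the generated manifest above is swapped in as kubeadm.yaml, it could be sanity-checked; recent kubeadm releases (roughly v1.26 and later) ship a `kubeadm config validate` subcommand. A sketch using the binary and config paths from this log (illustrative, not run by the test):
	
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new
	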
	I0815 16:15:21.234833    1970 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0815 16:15:21.236208    1970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:15:21.324964    1970 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:15:21.331217    1970 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000 for IP: 192.168.105.4
	I0815 16:15:21.331221    1970 certs.go:194] generating shared ca certs ...
	I0815 16:15:21.331228    1970 certs.go:226] acquiring lock for ca certs: {Name:mk1fa67494d9857cf8e0d98ec65576a15d2cd3c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:15:21.331399    1970 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-964/.minikube/ca.key
	I0815 16:15:21.331459    1970 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-964/.minikube/proxy-client-ca.key
	I0815 16:15:21.331462    1970 certs.go:256] generating profile certs ...
	I0815 16:15:21.331526    1970 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.key
	I0815 16:15:21.331578    1970 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/apiserver.key.52a80620
	I0815 16:15:21.331625    1970 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/proxy-client.key
	I0815 16:15:21.331763    1970 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/1446.pem (1338 bytes)
	W0815 16:15:21.331789    1970 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-964/.minikube/certs/1446_empty.pem, impossibly tiny 0 bytes
	I0815 16:15:21.331794    1970 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 16:15:21.331811    1970 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:15:21.331828    1970 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:15:21.331843    1970 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/key.pem (1679 bytes)
	I0815 16:15:21.331881    1970 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem (1708 bytes)
	I0815 16:15:21.332235    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:15:21.340919    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 16:15:21.349150    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:15:21.357516    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 16:15:21.365831    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 16:15:21.374261    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:15:21.382275    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:15:21.390423    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:15:21.398746    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem --> /usr/share/ca-certificates/14462.pem (1708 bytes)
	I0815 16:15:21.406826    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:15:21.414679    1970 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/certs/1446.pem --> /usr/share/ca-certificates/1446.pem (1338 bytes)
	I0815 16:15:21.422905    1970 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 16:15:21.429176    1970 ssh_runner.go:195] Run: openssl version
	I0815 16:15:21.431430    1970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14462.pem && ln -fs /usr/share/ca-certificates/14462.pem /etc/ssl/certs/14462.pem"
	I0815 16:15:21.435109    1970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14462.pem
	I0815 16:15:21.436686    1970 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:13 /usr/share/ca-certificates/14462.pem
	I0815 16:15:21.436702    1970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14462.pem
	I0815 16:15:21.438842    1970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14462.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:15:21.442052    1970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:15:21.445785    1970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:15:21.447388    1970 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:15:21.447404    1970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:15:21.449449    1970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:15:21.453128    1970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1446.pem && ln -fs /usr/share/ca-certificates/1446.pem /etc/ssl/certs/1446.pem"
	I0815 16:15:21.457105    1970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1446.pem
	I0815 16:15:21.458974    1970 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:13 /usr/share/ca-certificates/1446.pem
	I0815 16:15:21.458994    1970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1446.pem
	I0815 16:15:21.460989    1970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1446.pem /etc/ssl/certs/51391683.0"
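	
	The 8-hex-digit names used for the /etc/ssl/certs symlinks above are OpenSSL subject hashes, which is what the `openssl x509 -hash` runs compute. Reproducing one by hand (illustrative):
	
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"                     # b5213941, matching /etc/ssl/certs/b5213941.0 above
	ls -l "/etc/ssl/certs/$h.0"
	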
	I0815 16:15:21.464817    1970 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:15:21.466400    1970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:15:21.468749    1970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:15:21.470755    1970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:15:21.472713    1970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:15:21.474725    1970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:15:21.476718    1970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
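	
	In the checks above, `-checkend 86400` exits 0 only if the certificate remains valid for at least the next 86400 seconds (24 hours); a failing check on this restart path is what would trigger certificate regeneration. In sketch form:
	
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "cert valid for >= 24h"
	else
	  echo "cert expires within 24h; restart path would regenerate it"
	fi
	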
	I0815 16:15:21.478704    1970 kubeadm.go:392] StartCluster: {Name:functional-899000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-899000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:15:21.478772    1970 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:15:21.484729    1970 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 16:15:21.488726    1970 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 16:15:21.488729    1970 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 16:15:21.488747    1970 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 16:15:21.492239    1970 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:15:21.492542    1970 kubeconfig.go:125] found "functional-899000" server: "https://192.168.105.4:8441"
	I0815 16:15:21.493178    1970 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 16:15:21.496871    1970 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
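	The drift check above is plain `diff -u` between the kubeadm config generated for this start and the copy already on disk; any difference (here: the enable-admission-plugins override from the test's ExtraOptions) forces the control-plane restart that follows. A sketch of the equivalent shell logic, paths as in the log:

	  if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	    # configs differ -- adopt the new one and reconfigure the control plane
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	  fi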
	I0815 16:15:21.496875    1970 kubeadm.go:1160] stopping kube-system containers ...
	I0815 16:15:21.496920    1970 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:15:21.504123    1970 docker.go:483] Stopping containers: [0577f056b97f 508a8f9313ab a41cb65535ec 0afa47f66afa e10ed88101bb 6584f0af561f 2768e255f699 a14ceac28780 f83a96362b68 51c7538dde1f f12780093650 94c60bf3cf2f 772866a871da 962b9339369e fff0209d2f86 90b0be36ac02 bf4b68def9d9 3438fc891210 c0184d32d187 cd078f573a8c 532c3017ce19 ec5442a78577 98eff388540d 64d43d6529f1 caf0662bc3a1 6ddd7896a374 4dd99ea1ec86 cd855ecd4d1b]
	I0815 16:15:21.504186    1970 ssh_runner.go:195] Run: docker stop 0577f056b97f 508a8f9313ab a41cb65535ec 0afa47f66afa e10ed88101bb 6584f0af561f 2768e255f699 a14ceac28780 f83a96362b68 51c7538dde1f f12780093650 94c60bf3cf2f 772866a871da 962b9339369e fff0209d2f86 90b0be36ac02 bf4b68def9d9 3438fc891210 c0184d32d187 cd078f573a8c 532c3017ce19 ec5442a78577 98eff388540d 64d43d6529f1 caf0662bc3a1 6ddd7896a374 4dd99ea1ec86 cd855ecd4d1b
	I0815 16:15:21.512306    1970 ssh_runner.go:195] Run: sudo systemctl stop kubelet
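	cri-dockerd names containers k8s_<container>_<pod>_<namespace>_<pod-uid>_<attempt>, so the name filter k8s_.*_(kube-system)_ selects every kube-system container in one pass. A condensed equivalent of the two stop steps above:

	  docker ps -a --filter=name='k8s_.*_(kube-system)_' --format='{{.ID}}' \
	    | xargs -r docker stop
	  sudo systemctl stop kubelet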
	I0815 16:15:21.632665    1970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 16:15:21.638832    1970 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug 15 23:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Aug 15 23:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug 15 23:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Aug 15 23:14 /etc/kubernetes/scheduler.conf
	
	I0815 16:15:21.638875    1970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0815 16:15:21.643814    1970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0815 16:15:21.648904    1970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0815 16:15:21.653497    1970 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:15:21.653520    1970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 16:15:21.657776    1970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0815 16:15:21.661697    1970 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:15:21.661733    1970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
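	Each kubeconfig under /etc/kubernetes is grepped for the expected endpoint https://control-plane.minikube.internal:8441; files that do not mention it (here controller-manager.conf and scheduler.conf) are removed so that `kubeadm init phase kubeconfig` below can regenerate them. Roughly:

	  ep='https://control-plane.minikube.internal:8441'
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "$ep" /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	  done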
	I0815 16:15:21.665525    1970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 16:15:21.669409    1970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:15:21.686937    1970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:15:22.364620    1970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:15:22.497092    1970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:15:22.527361    1970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
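	Rather than a full `kubeadm init`, the restart replays individual init phases against the refreshed config: certificates, kubeconfigs, kubelet bootstrap, the static-pod control plane, and local etcd (the addon phase follows later, once the apiserver is healthy). The sequence, condensed:

	  k=/var/lib/minikube/binaries/v1.31.0
	  for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
	    # $phase is intentionally unquoted so 'certs all' splits into two arguments
	    sudo env PATH="$k:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	  done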
	I0815 16:15:22.563188    1970 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:15:22.563257    1970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:15:23.065338    1970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:15:23.565283    1970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:15:23.570745    1970 api_server.go:72] duration metric: took 1.007588542s to wait for apiserver process to appear ...
	I0815 16:15:23.570751    1970 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:15:23.570764    1970 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0815 16:15:25.704341    1970 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 16:15:25.704351    1970 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 16:15:25.704356    1970 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0815 16:15:25.710914    1970 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 16:15:25.710920    1970 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 16:15:26.072779    1970 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0815 16:15:26.079006    1970 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 16:15:26.079018    1970 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 16:15:26.572735    1970 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0815 16:15:26.578257    1970 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 16:15:26.578266    1970 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 16:15:27.072715    1970 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0815 16:15:27.075380    1970 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0815 16:15:27.079134    1970 api_server.go:141] control plane version: v1.31.0
	I0815 16:15:27.079140    1970 api_server.go:131] duration metric: took 3.508492458s to wait for apiserver health ...
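	The healthz progression above is the normal restart pattern: first 403, since the probe is unauthenticated and the RBAC grant that lets system:anonymous read /healthz (the system:public-info-viewer binding) is evidently not yet effective while the rbac/bootstrap-roles post-start hook is pending; then 500 with per-check output while that hook and the priority-class bootstrap are still failing; finally 200 once every check passes. The same probe by hand (the verbose query parameter requests the per-check listing):

	  curl -ks 'https://192.168.105.4:8441/healthz?verbose'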
	I0815 16:15:27.079144    1970 cni.go:84] Creating CNI manager for ""
	I0815 16:15:27.079150    1970 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:15:27.100762    1970 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 16:15:27.106724    1970 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 16:15:27.110477    1970 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
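	The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log. As an illustration only, a representative bridge conflist of the kind minikube generates, with the subnet assumed from the node's PodCIDR (10.244.0.0/24, reported under "describe nodes" below), might look like:

	  # <<- strips leading tabs, so the terminator may stay indented
	  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	EOF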
	I0815 16:15:27.116144    1970 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 16:15:27.120715    1970 system_pods.go:59] 7 kube-system pods found
	I0815 16:15:27.120723    1970 system_pods.go:61] "coredns-6f6b679f8f-zv57l" [e0262857-75b4-47b3-944a-88efcc326fb6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 16:15:27.120726    1970 system_pods.go:61] "etcd-functional-899000" [08931bf1-1aa4-4ec8-83dc-addd666b5a24] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 16:15:27.120729    1970 system_pods.go:61] "kube-apiserver-functional-899000" [e9cab163-346d-4d1b-af9e-9ede4e85cde0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 16:15:27.120731    1970 system_pods.go:61] "kube-controller-manager-functional-899000" [9558470f-318f-4817-98db-508471fedeb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 16:15:27.120733    1970 system_pods.go:61] "kube-proxy-rd5pz" [21c080aa-cdff-41c6-b427-1518b9b2af2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 16:15:27.120735    1970 system_pods.go:61] "kube-scheduler-functional-899000" [82678a0a-0d8f-4c01-b5e7-f41b4f42fbe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 16:15:27.120737    1970 system_pods.go:61] "storage-provisioner" [7795fb4f-b5b9-4231-bddb-fe511c29f7aa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 16:15:27.120739    1970 system_pods.go:74] duration metric: took 4.5925ms to wait for pod list to return data ...
	I0815 16:15:27.120742    1970 node_conditions.go:102] verifying NodePressure condition ...
	I0815 16:15:27.122151    1970 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:15:27.122156    1970 node_conditions.go:123] node cpu capacity is 2
	I0815 16:15:27.122161    1970 node_conditions.go:105] duration metric: took 1.417125ms to run NodePressure ...
	I0815 16:15:27.122167    1970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:15:27.344412    1970 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 16:15:27.348256    1970 kubeadm.go:739] kubelet initialised
	I0815 16:15:27.348263    1970 kubeadm.go:740] duration metric: took 3.841042ms waiting for restarted kubelet to initialise ...
	I0815 16:15:27.348269    1970 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:15:27.352381    1970 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-zv57l" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:29.358023    1970 pod_ready.go:103] pod "coredns-6f6b679f8f-zv57l" in "kube-system" namespace has status "Ready":"False"
	I0815 16:15:31.358248    1970 pod_ready.go:103] pod "coredns-6f6b679f8f-zv57l" in "kube-system" namespace has status "Ready":"False"
	I0815 16:15:33.368431    1970 pod_ready.go:103] pod "coredns-6f6b679f8f-zv57l" in "kube-system" namespace has status "Ready":"False"
	I0815 16:15:35.863586    1970 pod_ready.go:103] pod "coredns-6f6b679f8f-zv57l" in "kube-system" namespace has status "Ready":"False"
	I0815 16:15:36.857667    1970 pod_ready.go:93] pod "coredns-6f6b679f8f-zv57l" in "kube-system" namespace has status "Ready":"True"
	I0815 16:15:36.857675    1970 pod_ready.go:82] duration metric: took 9.505574875s for pod "coredns-6f6b679f8f-zv57l" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:36.857681    1970 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:36.860353    1970 pod_ready.go:93] pod "etcd-functional-899000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:15:36.860357    1970 pod_ready.go:82] duration metric: took 2.673209ms for pod "etcd-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:36.860361    1970 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:38.870666    1970 pod_ready.go:103] pod "kube-apiserver-functional-899000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:15:40.873637    1970 pod_ready.go:103] pod "kube-apiserver-functional-899000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:15:42.368133    1970 pod_ready.go:93] pod "kube-apiserver-functional-899000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:15:42.368146    1970 pod_ready.go:82] duration metric: took 5.507945166s for pod "kube-apiserver-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:42.368157    1970 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:42.372522    1970 pod_ready.go:93] pod "kube-controller-manager-functional-899000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:15:42.372531    1970 pod_ready.go:82] duration metric: took 4.367916ms for pod "kube-controller-manager-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:42.372538    1970 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rd5pz" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:42.376693    1970 pod_ready.go:93] pod "kube-proxy-rd5pz" in "kube-system" namespace has status "Ready":"True"
	I0815 16:15:42.376700    1970 pod_ready.go:82] duration metric: took 4.156792ms for pod "kube-proxy-rd5pz" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:42.376707    1970 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:42.380301    1970 pod_ready.go:93] pod "kube-scheduler-functional-899000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:15:42.380306    1970 pod_ready.go:82] duration metric: took 3.594125ms for pod "kube-scheduler-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:42.380313    1970 pod_ready.go:39] duration metric: took 15.032491s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:15:42.380330    1970 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 16:15:42.388682    1970 ops.go:34] apiserver oom_adj: -16
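	The oom_adj probe is a final protection check: -16 on the legacy -17..15 scale corresponds to a strongly negative oom_score_adj (kubelet assigns roughly -997 to critical/guaranteed pods), meaning the apiserver is among the last processes the kernel OOM killer would choose. By hand, reusing the pgrep pattern from the log:

	  pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	  cat /proc/$pid/oom_adj        # legacy scale, -17..15
	  cat /proc/$pid/oom_score_adj  # current scale, -1000..1000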
	I0815 16:15:42.388691    1970 kubeadm.go:597] duration metric: took 20.9005875s to restartPrimaryControlPlane
	I0815 16:15:42.388695    1970 kubeadm.go:394] duration metric: took 20.910623125s to StartCluster
	I0815 16:15:42.388710    1970 settings.go:142] acquiring lock: {Name:mk3ef55eecb064d007fbd1b55ea891b5b51acd22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:15:42.388889    1970 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:15:42.389487    1970 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/kubeconfig: {Name:mk7594709ce290a3e032dc58c8ec366ac5a2a141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:15:42.389865    1970 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:15:42.389878    1970 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 16:15:42.389935    1970 addons.go:69] Setting storage-provisioner=true in profile "functional-899000"
	I0815 16:15:42.389954    1970 addons.go:234] Setting addon storage-provisioner=true in "functional-899000"
	W0815 16:15:42.389957    1970 addons.go:243] addon storage-provisioner should already be in state true
	I0815 16:15:42.389961    1970 addons.go:69] Setting default-storageclass=true in profile "functional-899000"
	I0815 16:15:42.389975    1970 host.go:66] Checking if "functional-899000" exists ...
	I0815 16:15:42.389982    1970 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-899000"
	I0815 16:15:42.389998    1970 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:15:42.391451    1970 addons.go:234] Setting addon default-storageclass=true in "functional-899000"
	W0815 16:15:42.391455    1970 addons.go:243] addon default-storageclass should already be in state true
	I0815 16:15:42.391464    1970 host.go:66] Checking if "functional-899000" exists ...
	I0815 16:15:42.395537    1970 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 16:15:42.395544    1970 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 16:15:42.395553    1970 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/functional-899000/id_rsa Username:docker}
	I0815 16:15:42.395035    1970 out.go:177] * Verifying Kubernetes components...
	I0815 16:15:42.402837    1970 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:15:42.406915    1970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:15:42.410838    1970 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 16:15:42.410842    1970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 16:15:42.410847    1970 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/functional-899000/id_rsa Username:docker}
	I0815 16:15:42.521670    1970 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:15:42.527792    1970 node_ready.go:35] waiting up to 6m0s for node "functional-899000" to be "Ready" ...
	I0815 16:15:42.529189    1970 node_ready.go:49] node "functional-899000" has status "Ready":"True"
	I0815 16:15:42.529197    1970 node_ready.go:38] duration metric: took 1.393167ms for node "functional-899000" to be "Ready" ...
	I0815 16:15:42.529200    1970 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:15:42.531581    1970 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zv57l" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:42.534199    1970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 16:15:42.595518    1970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 16:15:42.763870    1970 pod_ready.go:93] pod "coredns-6f6b679f8f-zv57l" in "kube-system" namespace has status "Ready":"True"
	I0815 16:15:42.763876    1970 pod_ready.go:82] duration metric: took 232.296834ms for pod "coredns-6f6b679f8f-zv57l" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:42.763880    1970 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:42.875661    1970 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0815 16:15:42.879592    1970 addons.go:510] duration metric: took 489.732667ms for enable addons: enabled=[default-storageclass storage-provisioner]
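	Note that addon manifests are applied with the cluster's own versioned kubectl binary and the in-VM kubeconfig, not the host's client, which sidesteps the host-side version skew warned about at the end of this log. Equivalent by hand, paths as in the log:

	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml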
	I0815 16:15:43.172646    1970 pod_ready.go:93] pod "etcd-functional-899000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:15:43.172678    1970 pod_ready.go:82] duration metric: took 408.798959ms for pod "etcd-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:43.172697    1970 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:43.571614    1970 pod_ready.go:93] pod "kube-apiserver-functional-899000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:15:43.571628    1970 pod_ready.go:82] duration metric: took 398.932042ms for pod "kube-apiserver-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:43.571638    1970 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:43.967605    1970 pod_ready.go:93] pod "kube-controller-manager-functional-899000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:15:43.967619    1970 pod_ready.go:82] duration metric: took 395.986458ms for pod "kube-controller-manager-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:43.967628    1970 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rd5pz" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:44.371434    1970 pod_ready.go:93] pod "kube-proxy-rd5pz" in "kube-system" namespace has status "Ready":"True"
	I0815 16:15:44.371453    1970 pod_ready.go:82] duration metric: took 403.826916ms for pod "kube-proxy-rd5pz" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:44.371478    1970 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:44.769866    1970 pod_ready.go:93] pod "kube-scheduler-functional-899000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:15:44.769883    1970 pod_ready.go:82] duration metric: took 398.4055ms for pod "kube-scheduler-functional-899000" in "kube-system" namespace to be "Ready" ...
	I0815 16:15:44.769901    1970 pod_ready.go:39] duration metric: took 2.240761084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:15:44.769942    1970 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:15:44.770204    1970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:15:44.789848    1970 api_server.go:72] duration metric: took 2.400037083s to wait for apiserver process to appear ...
	I0815 16:15:44.789859    1970 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:15:44.789875    1970 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0815 16:15:44.796047    1970 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0815 16:15:44.797032    1970 api_server.go:141] control plane version: v1.31.0
	I0815 16:15:44.797043    1970 api_server.go:131] duration metric: took 7.178875ms to wait for apiserver health ...
	I0815 16:15:44.797050    1970 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 16:15:44.979073    1970 system_pods.go:59] 7 kube-system pods found
	I0815 16:15:44.979111    1970 system_pods.go:61] "coredns-6f6b679f8f-zv57l" [e0262857-75b4-47b3-944a-88efcc326fb6] Running
	I0815 16:15:44.979118    1970 system_pods.go:61] "etcd-functional-899000" [08931bf1-1aa4-4ec8-83dc-addd666b5a24] Running
	I0815 16:15:44.979123    1970 system_pods.go:61] "kube-apiserver-functional-899000" [e9cab163-346d-4d1b-af9e-9ede4e85cde0] Running
	I0815 16:15:44.979133    1970 system_pods.go:61] "kube-controller-manager-functional-899000" [9558470f-318f-4817-98db-508471fedeb2] Running
	I0815 16:15:44.979147    1970 system_pods.go:61] "kube-proxy-rd5pz" [21c080aa-cdff-41c6-b427-1518b9b2af2b] Running
	I0815 16:15:44.979153    1970 system_pods.go:61] "kube-scheduler-functional-899000" [82678a0a-0d8f-4c01-b5e7-f41b4f42fbe2] Running
	I0815 16:15:44.979158    1970 system_pods.go:61] "storage-provisioner" [7795fb4f-b5b9-4231-bddb-fe511c29f7aa] Running
	I0815 16:15:44.979170    1970 system_pods.go:74] duration metric: took 182.1195ms to wait for pod list to return data ...
	I0815 16:15:44.979183    1970 default_sa.go:34] waiting for default service account to be created ...
	I0815 16:15:45.171733    1970 default_sa.go:45] found service account: "default"
	I0815 16:15:45.171764    1970 default_sa.go:55] duration metric: took 192.573792ms for default service account to be created ...
	I0815 16:15:45.171779    1970 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 16:15:45.377480    1970 system_pods.go:86] 7 kube-system pods found
	I0815 16:15:45.377514    1970 system_pods.go:89] "coredns-6f6b679f8f-zv57l" [e0262857-75b4-47b3-944a-88efcc326fb6] Running
	I0815 16:15:45.377523    1970 system_pods.go:89] "etcd-functional-899000" [08931bf1-1aa4-4ec8-83dc-addd666b5a24] Running
	I0815 16:15:45.377529    1970 system_pods.go:89] "kube-apiserver-functional-899000" [e9cab163-346d-4d1b-af9e-9ede4e85cde0] Running
	I0815 16:15:45.377535    1970 system_pods.go:89] "kube-controller-manager-functional-899000" [9558470f-318f-4817-98db-508471fedeb2] Running
	I0815 16:15:45.377540    1970 system_pods.go:89] "kube-proxy-rd5pz" [21c080aa-cdff-41c6-b427-1518b9b2af2b] Running
	I0815 16:15:45.377546    1970 system_pods.go:89] "kube-scheduler-functional-899000" [82678a0a-0d8f-4c01-b5e7-f41b4f42fbe2] Running
	I0815 16:15:45.377551    1970 system_pods.go:89] "storage-provisioner" [7795fb4f-b5b9-4231-bddb-fe511c29f7aa] Running
	I0815 16:15:45.377564    1970 system_pods.go:126] duration metric: took 205.782708ms to wait for k8s-apps to be running ...
	I0815 16:15:45.377575    1970 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 16:15:45.377795    1970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:15:45.399250    1970 system_svc.go:56] duration metric: took 21.668875ms WaitForService to wait for kubelet
	I0815 16:15:45.399269    1970 kubeadm.go:582] duration metric: took 3.009477208s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:15:45.399291    1970 node_conditions.go:102] verifying NodePressure condition ...
	I0815 16:15:45.571664    1970 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:15:45.571704    1970 node_conditions.go:123] node cpu capacity is 2
	I0815 16:15:45.571735    1970 node_conditions.go:105] duration metric: took 172.437709ms to run NodePressure ...
	I0815 16:15:45.571760    1970 start.go:241] waiting for startup goroutines ...
	I0815 16:15:45.571773    1970 start.go:246] waiting for cluster config update ...
	I0815 16:15:45.571791    1970 start.go:255] writing updated cluster config ...
	I0815 16:15:45.573105    1970 ssh_runner.go:195] Run: rm -f paused
	I0815 16:15:45.637832    1970 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0815 16:15:45.641002    1970 out.go:201] 
	W0815 16:15:45.644005    1970 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0815 16:15:45.647949    1970 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0815 16:15:45.657062    1970 out.go:177] * Done! kubectl is now configured to use "functional-899000" cluster and "default" namespace by default
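	The closing warning reflects kubectl's version-skew policy: a client is supported within one minor version of the apiserver, and the host's kubectl 1.29.2 is two minors behind the 1.31.0 control plane. The suggested `minikube kubectl` passthrough downloads and runs a matching client, e.g.:

	  minikube -p functional-899000 kubectl -- version
	  minikube -p functional-899000 kubectl -- get pods -A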
	
	
	==> Docker <==
	Aug 15 23:16:24 functional-899000 dockerd[5815]: time="2024-08-15T23:16:24.146891288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:16:24 functional-899000 cri-dockerd[6077]: time="2024-08-15T23:16:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/94327863bc2b3f7a57df08434230861540028be32df43e5f3e06b1a0b323371d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 15 23:16:24 functional-899000 cri-dockerd[6077]: time="2024-08-15T23:16:24Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Aug 15 23:16:24 functional-899000 dockerd[5815]: time="2024-08-15T23:16:24.962569641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:16:24 functional-899000 dockerd[5815]: time="2024-08-15T23:16:24.962600102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:16:24 functional-899000 dockerd[5815]: time="2024-08-15T23:16:24.962605644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:16:24 functional-899000 dockerd[5815]: time="2024-08-15T23:16:24.962638356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:16:32 functional-899000 dockerd[5815]: time="2024-08-15T23:16:32.643681861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:16:32 functional-899000 dockerd[5815]: time="2024-08-15T23:16:32.643714488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:16:32 functional-899000 dockerd[5815]: time="2024-08-15T23:16:32.643721114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:16:32 functional-899000 dockerd[5815]: time="2024-08-15T23:16:32.644025556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:16:32 functional-899000 cri-dockerd[6077]: time="2024-08-15T23:16:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a74f26e889cacba3f1f9bcd7452ff23e9cb9f0d496b7f1d7499563d872c6c26/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 15 23:16:34 functional-899000 cri-dockerd[6077]: time="2024-08-15T23:16:34Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Aug 15 23:16:34 functional-899000 dockerd[5815]: time="2024-08-15T23:16:34.103004115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:16:34 functional-899000 dockerd[5815]: time="2024-08-15T23:16:34.103091372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:16:34 functional-899000 dockerd[5815]: time="2024-08-15T23:16:34.103111998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:16:34 functional-899000 dockerd[5815]: time="2024-08-15T23:16:34.103347893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:16:34 functional-899000 dockerd[5809]: time="2024-08-15T23:16:34.136554020Z" level=info msg="ignoring event" container=ba4a045973e79c7a2373410e2ea986b5beafb77a0d80525c89258ba7532c1d6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 23:16:34 functional-899000 dockerd[5815]: time="2024-08-15T23:16:34.136655861Z" level=info msg="shim disconnected" id=ba4a045973e79c7a2373410e2ea986b5beafb77a0d80525c89258ba7532c1d6d namespace=moby
	Aug 15 23:16:34 functional-899000 dockerd[5815]: time="2024-08-15T23:16:34.136683155Z" level=warning msg="cleaning up after shim disconnected" id=ba4a045973e79c7a2373410e2ea986b5beafb77a0d80525c89258ba7532c1d6d namespace=moby
	Aug 15 23:16:34 functional-899000 dockerd[5815]: time="2024-08-15T23:16:34.136686989Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 15 23:16:35 functional-899000 dockerd[5809]: time="2024-08-15T23:16:35.959858307Z" level=info msg="ignoring event" container=3a74f26e889cacba3f1f9bcd7452ff23e9cb9f0d496b7f1d7499563d872c6c26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 23:16:35 functional-899000 dockerd[5815]: time="2024-08-15T23:16:35.959965816Z" level=info msg="shim disconnected" id=3a74f26e889cacba3f1f9bcd7452ff23e9cb9f0d496b7f1d7499563d872c6c26 namespace=moby
	Aug 15 23:16:35 functional-899000 dockerd[5815]: time="2024-08-15T23:16:35.960027071Z" level=warning msg="cleaning up after shim disconnected" id=3a74f26e889cacba3f1f9bcd7452ff23e9cb9f0d496b7f1d7499563d872c6c26 namespace=moby
	Aug 15 23:16:35 functional-899000 dockerd[5815]: time="2024-08-15T23:16:35.960032154Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ba4a045973e79       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   3 seconds ago        Exited              mount-munger              0                   3a74f26e889ca       busybox-mount
	10a0dd4f9b996       nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add                         13 seconds ago       Running             myfrontend                0                   94327863bc2b3       sp-pod
	b1e960c03ce23       72565bf5bbedf                                                                                         14 seconds ago       Exited              echoserver-arm            2                   4cdbd032d5f7f       hello-node-connect-65d86f57f4-4jl4d
	cc11f21933767       72565bf5bbedf                                                                                         22 seconds ago       Exited              echoserver-arm            2                   9aba5a1d0dbd7       hello-node-64b4f8f9ff-xqp5c
	c17878795cc7e       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         36 seconds ago       Running             nginx                     0                   ac5afdda87756       nginx-svc
	409ffb12d0b19       ba04bb24b9575                                                                                         54 seconds ago       Running             storage-provisioner       4                   c9913019f0b6e       storage-provisioner
	7784e525d5eed       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   c7eb2a7a0ab93       coredns-6f6b679f8f-zv57l
	9c83d4898d02c       71d55d66fd4ee                                                                                         About a minute ago   Running             kube-proxy                2                   5a77076cfe282       kube-proxy-rd5pz
	8577ef0620ef2       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       3                   c9913019f0b6e       storage-provisioner
	c85931c9c338c       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   01feee037bc10       etcd-functional-899000
	a683e1af12b9d       fcb0683e6bdbd                                                                                         About a minute ago   Running             kube-controller-manager   2                   17193703c6834       kube-controller-manager-functional-899000
	c97f708578c86       fbbbd428abb4d                                                                                         About a minute ago   Running             kube-scheduler            2                   82cb990fd90c3       kube-scheduler-functional-899000
	3ddc4a5d16c34       cd0f0ae0ec9e0                                                                                         About a minute ago   Running             kube-apiserver            0                   029d2e11dcf55       kube-apiserver-functional-899000
	0577f056b97f9       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   0afa47f66afab       coredns-6f6b679f8f-zv57l
	a41cb65535ecf       71d55d66fd4ee                                                                                         About a minute ago   Exited              kube-proxy                1                   6584f0af561fc       kube-proxy-rd5pz
	2768e255f699b       fbbbd428abb4d                                                                                         About a minute ago   Exited              kube-scheduler            1                   94c60bf3cf2fe       kube-scheduler-functional-899000
	a14ceac28780f       27e3830e14027                                                                                         About a minute ago   Exited              etcd                      1                   772866a871da4       etcd-functional-899000
	f83a96362b68d       fcb0683e6bdbd                                                                                         About a minute ago   Exited              kube-controller-manager   1                   f12780093650e       kube-controller-manager-functional-899000
	
	
	==> coredns [0577f056b97f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48153 - 23017 "HINFO IN 189319954744141998.7399840250654652682. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.011150346s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7784e525d5ee] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47813 - 27044 "HINFO IN 7258353465881619586.2725457974310428614. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010131319s
	[INFO] 10.244.0.1:46523 - 25589 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000113219s
	[INFO] 10.244.0.1:14718 - 49713 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.0000948s
	[INFO] 10.244.0.1:13633 - 43141 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000036128s
	[INFO] 10.244.0.1:18410 - 6801 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001524427s
	[INFO] 10.244.0.1:61165 - 18514 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000145055s
	[INFO] 10.244.0.1:28746 - 53931 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000047921s
	
	
	==> describe nodes <==
	Name:               functional-899000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-899000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=functional-899000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T16_13_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:13:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-899000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:16:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:16:27 +0000   Thu, 15 Aug 2024 23:13:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:16:27 +0000   Thu, 15 Aug 2024 23:13:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:16:27 +0000   Thu, 15 Aug 2024 23:13:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:16:27 +0000   Thu, 15 Aug 2024 23:13:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-899000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 5febddd75af146acb3819e3500722074
	  System UUID:                5febddd75af146acb3819e3500722074
	  Boot ID:                    d47522a5-2bd0-49e4-91ec-59bda619a6cf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-xqp5c                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  default                     hello-node-connect-65d86f57f4-4jl4d          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-6f6b679f8f-zv57l                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m54s
	  kube-system                 etcd-functional-899000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m
	  kube-system                 kube-apiserver-functional-899000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-functional-899000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-proxy-rd5pz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kube-scheduler-functional-899000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m51s              kube-proxy       
	  Normal  Starting                 69s                kube-proxy       
	  Normal  Starting                 116s               kube-proxy       
	  Normal  NodeHasSufficientMemory  3m                 kubelet          Node functional-899000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  3m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    3m                 kubelet          Node functional-899000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m                 kubelet          Node functional-899000 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m                 kubelet          Starting kubelet.
	  Normal  NodeReady                2m56s              kubelet          Node functional-899000 status is now: NodeReady
	  Normal  RegisteredNode           2m55s              node-controller  Node functional-899000 event: Registered Node functional-899000 in Controller
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node functional-899000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node functional-899000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m (x7 over 2m)    kubelet          Node functional-899000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           114s               node-controller  Node functional-899000 event: Registered Node functional-899000 in Controller
	  Normal  Starting                 75s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  75s (x8 over 75s)  kubelet          Node functional-899000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s (x8 over 75s)  kubelet          Node functional-899000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s (x7 over 75s)  kubelet          Node functional-899000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           68s                node-controller  Node functional-899000 event: Registered Node functional-899000 in Controller
	
	
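The node itself looks healthy: 750m CPU (37%) requested of the 2-CPU capacity and 170Mi of memory, so the failures below are not resource starvation. As a cross-check, a minimal sketch (assuming the functional-899000 context is still reachable) that lists per-pod CPU requests on this node:

    kubectl --context functional-899000 get pods -A \
      --field-selector spec.nodeName=functional-899000 \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].resources.requests.cpu}{"\n"}{end}'
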
	==> dmesg <==
	[  +3.413941] kauditd_printk_skb: 199 callbacks suppressed
	[ +15.091644] systemd-fstab-generator[4902]: Ignoring "noauto" option for root device
	[  +0.056881] kauditd_printk_skb: 33 callbacks suppressed
	[Aug15 23:15] systemd-fstab-generator[5334]: Ignoring "noauto" option for root device
	[  +0.052553] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.093919] systemd-fstab-generator[5367]: Ignoring "noauto" option for root device
	[  +0.107126] systemd-fstab-generator[5379]: Ignoring "noauto" option for root device
	[  +0.091084] systemd-fstab-generator[5393]: Ignoring "noauto" option for root device
	[  +5.128493] kauditd_printk_skb: 91 callbacks suppressed
	[  +7.323438] systemd-fstab-generator[6030]: Ignoring "noauto" option for root device
	[  +0.087297] systemd-fstab-generator[6042]: Ignoring "noauto" option for root device
	[  +0.093040] systemd-fstab-generator[6054]: Ignoring "noauto" option for root device
	[  +0.099055] systemd-fstab-generator[6069]: Ignoring "noauto" option for root device
	[  +0.219912] systemd-fstab-generator[6237]: Ignoring "noauto" option for root device
	[  +1.164276] systemd-fstab-generator[6358]: Ignoring "noauto" option for root device
	[  +4.416890] kauditd_printk_skb: 199 callbacks suppressed
	[  +9.831971] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.763601] systemd-fstab-generator[7420]: Ignoring "noauto" option for root device
	[  +5.403407] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.648273] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.606219] kauditd_printk_skb: 28 callbacks suppressed
	[Aug15 23:16] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.369793] kauditd_printk_skb: 32 callbacks suppressed
	[  +7.118345] kauditd_printk_skb: 1 callbacks suppressed
	[  +9.947570] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [a14ceac28780] <==
	{"level":"info","ts":"2024-08-15T23:14:39.739057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:14:39.739149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-15T23:14:39.739189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T23:14:39.739211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-15T23:14:39.739242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-15T23:14:39.739259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-15T23:14:39.744418Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-899000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T23:14:39.744490Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T23:14:39.744922Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T23:14:39.745115Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T23:14:39.745038Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T23:14:39.746780Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:14:39.746801Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:14:39.748871Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T23:14:39.749438Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-15T23:15:08.619884Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-15T23:15:08.619926Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-899000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-15T23:15:08.619964Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T23:15:08.620006Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T23:15:08.640091Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T23:15:08.640120Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T23:15:08.641724Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-15T23:15:08.643133Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-15T23:15:08.643167Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-15T23:15:08.643170Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-899000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [c85931c9c338] <==
	{"level":"info","ts":"2024-08-15T23:15:23.600948Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-15T23:15:23.601019Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:15:23.600753Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:15:23.604486Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T23:15:23.603810Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:15:23.604526Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-15T23:15:23.611780Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-15T23:15:23.611855Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T23:15:23.611893Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T23:15:25.371636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-15T23:15:25.371897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-15T23:15:25.371959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-15T23:15:25.371994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-15T23:15:25.372017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-15T23:15:25.372041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-15T23:15:25.372059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-15T23:15:25.377500Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-899000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T23:15:25.377795Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T23:15:25.377871Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T23:15:25.378653Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T23:15:25.377921Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T23:15:25.380936Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:15:25.381522Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:15:25.383226Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T23:15:25.384831Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
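The raft term climbs from 2 to 4 across the two etcd container logs because each restart in the functional test tears etcd down, and a single-member cluster must re-elect itself on every start; the "skipped leadership transfer for single voting member cluster" line confirms this is the expected single-node path. A sketch for confirming the current term and leader from inside the guest (container ID is a placeholder; cert paths are taken from the TLS config logged above):

    minikube -p functional-899000 ssh
    sudo docker ps --filter name=etcd --format '{{.ID}}'
    sudo docker exec <etcd-container-id> etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint status -w table
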
	==> kernel <==
	 23:16:37 up 3 min,  0 users,  load average: 0.56, 0.42, 0.18
	Linux functional-899000 5.10.207 #1 SMP PREEMPT Thu Aug 15 18:35:44 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3ddc4a5d16c3] <==
	I0815 23:15:25.976171       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 23:15:25.977948       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 23:15:25.984796       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 23:15:25.993004       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 23:15:25.993018       1 aggregator.go:171] initial CRD sync complete...
	I0815 23:15:25.993022       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 23:15:25.993025       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 23:15:25.993027       1 cache.go:39] Caches are synced for autoregister controller
	I0815 23:15:25.995243       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:15:25.995248       1 policy_source.go:224] refreshing policies
	I0815 23:15:26.017179       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 23:15:26.875446       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0815 23:15:26.980125       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0815 23:15:26.980779       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 23:15:26.982387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 23:15:27.347292       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 23:15:27.351833       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 23:15:27.361738       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 23:15:27.368802       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 23:15:27.370685       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 23:15:47.380503       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.156.224"}
	I0815 23:15:53.657597       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0815 23:15:53.701990       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.245.209"}
	I0815 23:15:57.777752       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.71.127"}
	I0815 23:16:07.215326       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.7.169"}
	
	
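The "allocated clusterIPs" entries above cover every service the failing test touches (hello-node, nginx-svc, hello-node-connect), so service creation itself succeeded. A quick sketch to cross-check those allocations, assuming the cluster is still up:

    kubectl --context functional-899000 get svc hello-node hello-node-connect nginx-svc \
      -o custom-columns=NAME:.metadata.name,CLUSTER-IP:.spec.clusterIP,PORT:.spec.ports[*].port
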
	==> kube-controller-manager [a683e1af12b9] <==
	I0815 23:15:29.667371       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="18.043µs"
	I0815 23:15:29.879455       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 23:15:29.959217       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 23:15:29.959281       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0815 23:15:36.884539       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="8.573148ms"
	I0815 23:15:36.885393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="28.503µs"
	I0815 23:15:53.664503       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="4.900726ms"
	I0815 23:15:53.669695       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="5.032238ms"
	I0815 23:15:53.669968       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="23.168µs"
	I0815 23:15:53.670095       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="8.168µs"
	I0815 23:15:53.674887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="9.751µs"
	I0815 23:15:59.302815       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="33.752µs"
	I0815 23:16:00.317236       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="23.585µs"
	I0815 23:16:01.330335       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="25.628µs"
	I0815 23:16:07.185027       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="9.491383ms"
	I0815 23:16:07.188312       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="3.223452ms"
	I0815 23:16:07.188343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="10.917µs"
	I0815 23:16:08.457539       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="63.464µs"
	I0815 23:16:09.459860       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="23.043µs"
	I0815 23:16:16.561357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="53.212µs"
	I0815 23:16:23.747446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="174.432µs"
	I0815 23:16:24.712512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="24.002µs"
	I0815 23:16:27.567959       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-899000"
	I0815 23:16:27.747620       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="90.008µs"
	I0815 23:16:35.750690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="103.092µs"
	
	
	==> kube-controller-manager [f83a96362b68] <==
	I0815 23:14:43.620547       1 shared_informer.go:320] Caches are synced for expand
	I0815 23:14:43.621082       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"functional-899000\" does not exist"
	I0815 23:14:43.622464       1 shared_informer.go:320] Caches are synced for attach detach
	I0815 23:14:43.623785       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0815 23:14:43.667887       1 shared_informer.go:320] Caches are synced for TTL
	I0815 23:14:43.668993       1 shared_informer.go:320] Caches are synced for taint
	I0815 23:14:43.669059       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0815 23:14:43.669107       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-899000"
	I0815 23:14:43.669144       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0815 23:14:43.693320       1 shared_informer.go:320] Caches are synced for daemon sets
	I0815 23:14:43.695450       1 shared_informer.go:320] Caches are synced for GC
	I0815 23:14:43.704541       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0815 23:14:43.719790       1 shared_informer.go:320] Caches are synced for node
	I0815 23:14:43.719811       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0815 23:14:43.719820       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0815 23:14:43.719869       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0815 23:14:43.719878       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0815 23:14:43.719910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-899000"
	I0815 23:14:43.768688       1 shared_informer.go:320] Caches are synced for cronjob
	I0815 23:14:43.821511       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 23:14:43.821678       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 23:14:43.869418       1 shared_informer.go:320] Caches are synced for persistent volume
	I0815 23:14:44.232026       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 23:14:44.317504       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 23:14:44.317551       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [9c83d4898d02] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:15:27.260465       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:15:27.265294       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0815 23:15:27.265360       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:15:27.276195       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:15:27.276213       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:15:27.276227       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:15:27.276899       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:15:27.277012       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:15:27.277020       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:15:27.277456       1 config.go:197] "Starting service config controller"
	I0815 23:15:27.277468       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:15:27.277493       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:15:27.277507       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:15:27.277740       1 config.go:326] "Starting node config controller"
	I0815 23:15:27.277756       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:15:27.378159       1 shared_informer.go:320] Caches are synced for node config
	I0815 23:15:27.378164       1 shared_informer.go:320] Caches are synced for service config
	I0815 23:15:27.378175       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a41cb65535ec] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:14:41.067410       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:14:41.071111       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0815 23:14:41.071138       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:14:41.078677       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:14:41.078691       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:14:41.078701       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:14:41.079283       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:14:41.079368       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:14:41.079375       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:14:41.079827       1 config.go:197] "Starting service config controller"
	I0815 23:14:41.079836       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:14:41.079846       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:14:41.079849       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:14:41.080058       1 config.go:326] "Starting node config controller"
	I0815 23:14:41.080062       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:14:41.180332       1 shared_informer.go:320] Caches are synced for node config
	I0815 23:14:41.180339       1 shared_informer.go:320] Caches are synced for service config
	I0815 23:14:41.180350       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
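Both kube-proxy instances log the same pair of nftables cleanup failures ("Operation not supported") and then settle on the iptables proxier; the Buildroot guest kernel lacks nftables support, so these errors are noise rather than a cause of the failure. A sketch to verify from inside the guest (nft may fail or be absent; that is the point):

    minikube -p functional-899000 ssh
    sudo nft list tables                  # expected to fail on this kernel
    sudo iptables-save | grep -c 'KUBE-'  # the iptables proxier rules are present instead
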
	==> kube-scheduler [2768e255f699] <==
	I0815 23:14:38.654064       1 serving.go:386] Generated self-signed cert in-memory
	W0815 23:14:40.265560       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 23:14:40.265577       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 23:14:40.265582       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 23:14:40.265585       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 23:14:40.289679       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 23:14:40.289729       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:14:40.291439       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 23:14:40.294675       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 23:14:40.299317       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 23:14:40.298229       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 23:14:40.399747       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 23:15:08.622485       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0815 23:15:08.622700       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0815 23:15:08.622763       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c97f708578c8] <==
	I0815 23:15:24.090157       1 serving.go:386] Generated self-signed cert in-memory
	W0815 23:15:25.904344       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 23:15:25.904361       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 23:15:25.904366       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 23:15:25.904368       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 23:15:25.937169       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 23:15:25.937190       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:15:25.938152       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 23:15:25.938212       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 23:15:25.938224       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 23:15:25.938232       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 23:15:26.040008       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 23:16:23 functional-899000 kubelet[6365]: I0815 23:16:23.022202    6365 reconciler_common.go:288] "Volume detached for volume \"pvc-3b6e58c5-1148-4231-a6c3-f2d09af3a906\" (UniqueName: \"kubernetes.io/host-path/7f06312c-dab2-4a50-b77b-1a55c16796eb-pvc-3b6e58c5-1148-4231-a6c3-f2d09af3a906\") on node \"functional-899000\" DevicePath \"\""
	Aug 15 23:16:23 functional-899000 kubelet[6365]: I0815 23:16:23.022218    6365 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rrl9f\" (UniqueName: \"kubernetes.io/projected/7f06312c-dab2-4a50-b77b-1a55c16796eb-kube-api-access-rrl9f\") on node \"functional-899000\" DevicePath \"\""
	Aug 15 23:16:23 functional-899000 kubelet[6365]: I0815 23:16:23.732459    6365 scope.go:117] "RemoveContainer" containerID="cac97235b882744f8e627894b183df4e24df1cd86ccee175d3dee908c99eb30f"
	Aug 15 23:16:23 functional-899000 kubelet[6365]: E0815 23:16:23.799140    6365 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f06312c-dab2-4a50-b77b-1a55c16796eb" containerName="myfrontend"
	Aug 15 23:16:23 functional-899000 kubelet[6365]: I0815 23:16:23.799181    6365 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f06312c-dab2-4a50-b77b-1a55c16796eb" containerName="myfrontend"
	Aug 15 23:16:23 functional-899000 kubelet[6365]: I0815 23:16:23.928960    6365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3b6e58c5-1148-4231-a6c3-f2d09af3a906\" (UniqueName: \"kubernetes.io/host-path/c2a08fdb-4091-463e-a857-94455ae5e57a-pvc-3b6e58c5-1148-4231-a6c3-f2d09af3a906\") pod \"sp-pod\" (UID: \"c2a08fdb-4091-463e-a857-94455ae5e57a\") " pod="default/sp-pod"
	Aug 15 23:16:23 functional-899000 kubelet[6365]: I0815 23:16:23.928988    6365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dnc6\" (UniqueName: \"kubernetes.io/projected/c2a08fdb-4091-463e-a857-94455ae5e57a-kube-api-access-9dnc6\") pod \"sp-pod\" (UID: \"c2a08fdb-4091-463e-a857-94455ae5e57a\") " pod="default/sp-pod"
	Aug 15 23:16:24 functional-899000 kubelet[6365]: I0815 23:16:24.704657    6365 scope.go:117] "RemoveContainer" containerID="cac97235b882744f8e627894b183df4e24df1cd86ccee175d3dee908c99eb30f"
	Aug 15 23:16:24 functional-899000 kubelet[6365]: I0815 23:16:24.704849    6365 scope.go:117] "RemoveContainer" containerID="b1e960c03ce23a48b257f1dcb01e0b0cd28e996604237b525eab0d2355a9c7bc"
	Aug 15 23:16:24 functional-899000 kubelet[6365]: E0815 23:16:24.704924    6365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-4jl4d_default(8c2e5f18-a7be-4a96-b5ec-5279201fa301)\"" pod="default/hello-node-connect-65d86f57f4-4jl4d" podUID="8c2e5f18-a7be-4a96-b5ec-5279201fa301"
	Aug 15 23:16:24 functional-899000 kubelet[6365]: I0815 23:16:24.735380    6365 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f06312c-dab2-4a50-b77b-1a55c16796eb" path="/var/lib/kubelet/pods/7f06312c-dab2-4a50-b77b-1a55c16796eb/volumes"
	Aug 15 23:16:27 functional-899000 kubelet[6365]: I0815 23:16:27.733424    6365 scope.go:117] "RemoveContainer" containerID="cc11f21933767657ca489656cfef241e11e7eee8c767ece0800d2d1a138f531f"
	Aug 15 23:16:27 functional-899000 kubelet[6365]: E0815 23:16:27.733769    6365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-xqp5c_default(8934fa28-b843-4249-b4ca-ac02080088d2)\"" pod="default/hello-node-64b4f8f9ff-xqp5c" podUID="8934fa28-b843-4249-b4ca-ac02080088d2"
	Aug 15 23:16:27 functional-899000 kubelet[6365]: I0815 23:16:27.746842    6365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=4.013951005 podStartE2EDuration="4.746806989s" podCreationTimestamp="2024-08-15 23:16:23 +0000 UTC" firstStartedPulling="2024-08-15 23:16:24.203192903 +0000 UTC m=+61.517321039" lastFinishedPulling="2024-08-15 23:16:24.936048887 +0000 UTC m=+62.250177023" observedRunningTime="2024-08-15 23:16:25.748637468 +0000 UTC m=+63.062765604" watchObservedRunningTime="2024-08-15 23:16:27.746806989 +0000 UTC m=+65.060935166"
	Aug 15 23:16:32 functional-899000 kubelet[6365]: I0815 23:16:32.419193    6365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b-test-volume\") pod \"busybox-mount\" (UID: \"d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b\") " pod="default/busybox-mount"
	Aug 15 23:16:32 functional-899000 kubelet[6365]: I0815 23:16:32.419225    6365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2fh\" (UniqueName: \"kubernetes.io/projected/d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b-kube-api-access-gq2fh\") pod \"busybox-mount\" (UID: \"d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b\") " pod="default/busybox-mount"
	Aug 15 23:16:35 functional-899000 kubelet[6365]: I0815 23:16:35.733035    6365 scope.go:117] "RemoveContainer" containerID="b1e960c03ce23a48b257f1dcb01e0b0cd28e996604237b525eab0d2355a9c7bc"
	Aug 15 23:16:35 functional-899000 kubelet[6365]: E0815 23:16:35.733400    6365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-4jl4d_default(8c2e5f18-a7be-4a96-b5ec-5279201fa301)\"" pod="default/hello-node-connect-65d86f57f4-4jl4d" podUID="8c2e5f18-a7be-4a96-b5ec-5279201fa301"
	Aug 15 23:16:36 functional-899000 kubelet[6365]: I0815 23:16:36.157598    6365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq2fh\" (UniqueName: \"kubernetes.io/projected/d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b-kube-api-access-gq2fh\") pod \"d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b\" (UID: \"d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b\") "
	Aug 15 23:16:36 functional-899000 kubelet[6365]: I0815 23:16:36.157678    6365 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b-test-volume\") pod \"d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b\" (UID: \"d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b\") "
	Aug 15 23:16:36 functional-899000 kubelet[6365]: I0815 23:16:36.157749    6365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b-test-volume" (OuterVolumeSpecName: "test-volume") pod "d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b" (UID: "d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 15 23:16:36 functional-899000 kubelet[6365]: I0815 23:16:36.158660    6365 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b-kube-api-access-gq2fh" (OuterVolumeSpecName: "kube-api-access-gq2fh") pod "d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b" (UID: "d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b"). InnerVolumeSpecName "kube-api-access-gq2fh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 23:16:36 functional-899000 kubelet[6365]: I0815 23:16:36.258855    6365 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gq2fh\" (UniqueName: \"kubernetes.io/projected/d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b-kube-api-access-gq2fh\") on node \"functional-899000\" DevicePath \"\""
	Aug 15 23:16:36 functional-899000 kubelet[6365]: I0815 23:16:36.258888    6365 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b-test-volume\") on node \"functional-899000\" DevicePath \"\""
	Aug 15 23:16:36 functional-899000 kubelet[6365]: I0815 23:16:36.896179    6365 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a74f26e889cacba3f1f9bcd7452ff23e9cb9f0d496b7f1d7499563d872c6c26"
	
	
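The kubelet log holds the proximate cause of the ServiceCmdConnect failure: the echoserver-arm container in both hello-node pods is in CrashLoopBackOff ("back-off 20s restarting failed container"), so the service under test never gets a ready endpoint. A sketch for pulling the crash output, assuming the default app=<name> label that kubectl create deployment applies:

    kubectl --context functional-899000 logs deployment/hello-node-connect \
      -c echoserver-arm --previous
    kubectl --context functional-899000 get endpoints hello-node-connect
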
	==> storage-provisioner [409ffb12d0b1] <==
	I0815 23:15:43.815581       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 23:15:43.820882       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 23:15:43.820900       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 23:16:01.212374       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 23:16:01.212444       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-899000_c1c04fd0-9e61-4c7c-8a15-2d08140a3e09!
	I0815 23:16:01.212774       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c7128668-9ca0-4a87-b7b2-a0f3f93bed6b", APIVersion:"v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-899000_c1c04fd0-9e61-4c7c-8a15-2d08140a3e09 became leader
	I0815 23:16:01.313449       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-899000_c1c04fd0-9e61-4c7c-8a15-2d08140a3e09!
	I0815 23:16:10.391184       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0815 23:16:10.391646       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3b6e58c5-1148-4231-a6c3-f2d09af3a906", APIVersion:"v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0815 23:16:10.391280       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    5de74a18-6ee6-473a-8c9d-c338eefb81a4 342 0 2024-08-15 23:13:43 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-15 23:13:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-3b6e58c5-1148-4231-a6c3-f2d09af3a906 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  3b6e58c5-1148-4231-a6c3-f2d09af3a906 750 0 2024-08-15 23:16:10 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-15 23:16:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-15 23:16:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0815 23:16:10.392633       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-3b6e58c5-1148-4231-a6c3-f2d09af3a906" provisioned
	I0815 23:16:10.392662       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0815 23:16:10.392667       1 volume_store.go:212] Trying to save persistentvolume "pvc-3b6e58c5-1148-4231-a6c3-f2d09af3a906"
	I0815 23:16:10.396654       1 volume_store.go:219] persistentvolume "pvc-3b6e58c5-1148-4231-a6c3-f2d09af3a906" saved
	I0815 23:16:10.396895       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3b6e58c5-1148-4231-a6c3-f2d09af3a906", APIVersion:"v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-3b6e58c5-1148-4231-a6c3-f2d09af3a906
	
	
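This provisioner round-trip is healthy: default/myclaim is provisioned to /tmp/hostpath-provisioner/default/myclaim and the PV is saved. A sketch to confirm where a hostpath PV landed, using the PV name from the log above:

    kubectl --context functional-899000 get pv pvc-3b6e58c5-1148-4231-a6c3-f2d09af3a906 \
      -o jsonpath='{.spec.hostPath.path}{"\n"}'
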
	==> storage-provisioner [8577ef0620ef] <==
	I0815 23:15:27.195961       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0815 23:15:27.197268       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-899000 -n functional-899000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-899000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-899000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-899000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-899000/192.168.105.4
	Start Time:       Thu, 15 Aug 2024 16:16:32 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://ba4a045973e79c7a2373410e2ea986b5beafb77a0d80525c89258ba7532c1d6d
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 15 Aug 2024 16:16:34 -0700
	      Finished:     Thu, 15 Aug 2024 16:16:34 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gq2fh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-gq2fh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5s    default-scheduler  Successfully assigned default/busybox-mount to functional-899000
	  Normal  Pulling    5s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.374s (1.374s including waiting). Image size: 3547125 bytes.
	  Normal  Created    3s    kubelet            Created container mount-munger
	  Normal  Started    3s    kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (30.74s)
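
Note that busybox-mount above shows Status: Succeeded; the post-mortem lists it only because the status.phase!=Running selector also matches completed pods. A sketch to filter those out when hunting for genuinely broken pods:

    kubectl --context functional-899000 get po -A \
      --field-selector=status.phase!=Running,status.phase!=Succeeded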

TestMultiControlPlane/serial/StopSecondaryNode (312.29s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 node stop m02 -v=7 --alsologtostderr
E0815 16:20:54.151527    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:20:54.795151    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:20:56.077508    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:20:58.640844    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:21:03.764131    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-719000 node stop m02 -v=7 --alsologtostderr: (12.186431042s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr
E0815 16:21:14.005473    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:21:34.488446    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:22:15.450734    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:23:37.387687    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:23:56.899221    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr: exit status 7 (3m45.062496083s)

-- stdout --
	ha-719000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-719000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-719000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-719000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0815 16:21:06.132933    2569 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:21:06.133122    2569 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:21:06.133126    2569 out.go:358] Setting ErrFile to fd 2...
	I0815 16:21:06.133129    2569 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:21:06.133297    2569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:21:06.133441    2569 out.go:352] Setting JSON to false
	I0815 16:21:06.133459    2569 mustload.go:65] Loading cluster: ha-719000
	I0815 16:21:06.133490    2569 notify.go:220] Checking for updates...
	I0815 16:21:06.133719    2569 config.go:182] Loaded profile config "ha-719000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:21:06.133726    2569 status.go:255] checking status of ha-719000 ...
	I0815 16:21:06.134631    2569 status.go:330] ha-719000 host status = "Running" (err=<nil>)
	I0815 16:21:06.134639    2569 host.go:66] Checking if "ha-719000" exists ...
	I0815 16:21:06.134746    2569 host.go:66] Checking if "ha-719000" exists ...
	I0815 16:21:06.134867    2569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 16:21:06.134878    2569 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000/id_rsa Username:docker}
	W0815 16:22:21.135055    2569 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0815 16:22:21.135136    2569 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0815 16:22:21.135145    2569 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0815 16:22:21.135149    2569 status.go:257] ha-719000 status: &{Name:ha-719000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 16:22:21.135162    2569 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0815 16:22:21.135165    2569 status.go:255] checking status of ha-719000-m02 ...
	I0815 16:22:21.135381    2569 status.go:330] ha-719000-m02 host status = "Stopped" (err=<nil>)
	I0815 16:22:21.135386    2569 status.go:343] host is not running, skipping remaining checks
	I0815 16:22:21.135392    2569 status.go:257] ha-719000-m02 status: &{Name:ha-719000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:22:21.135396    2569 status.go:255] checking status of ha-719000-m03 ...
	I0815 16:22:21.136093    2569 status.go:330] ha-719000-m03 host status = "Running" (err=<nil>)
	I0815 16:22:21.136099    2569 host.go:66] Checking if "ha-719000-m03" exists ...
	I0815 16:22:21.136194    2569 host.go:66] Checking if "ha-719000-m03" exists ...
	I0815 16:22:21.136312    2569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 16:22:21.136320    2569 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m03/id_rsa Username:docker}
	W0815 16:23:36.152593    2569 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0815 16:23:36.152640    2569 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0815 16:23:36.152660    2569 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0815 16:23:36.152664    2569 status.go:257] ha-719000-m03 status: &{Name:ha-719000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 16:23:36.152674    2569 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0815 16:23:36.152678    2569 status.go:255] checking status of ha-719000-m04 ...
	I0815 16:23:36.153345    2569 status.go:330] ha-719000-m04 host status = "Running" (err=<nil>)
	I0815 16:23:36.153351    2569 host.go:66] Checking if "ha-719000-m04" exists ...
	I0815 16:23:36.153441    2569 host.go:66] Checking if "ha-719000-m04" exists ...
	I0815 16:23:36.154657    2569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 16:23:36.154664    2569 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m04/id_rsa Username:docker}
	W0815 16:24:51.170904    2569 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0815 16:24:51.170945    2569 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0815 16:24:51.170953    2569 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0815 16:24:51.170957    2569 status.go:257] ha-719000-m04 status: &{Name:ha-719000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0815 16:24:51.170965    2569 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr": ha-719000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-719000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-719000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-719000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr": ha-719000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-719000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-719000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-719000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr": ha-719000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-719000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-719000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-719000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-719000 -n ha-719000
E0815 16:25:53.525373    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-719000 -n ha-719000: exit status 3 (1m15.043375083s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0815 16:26:06.209187    2592 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0815 16:26:06.209239    2592 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-719000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (312.29s)
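
The status probe above fails the same way on each node: ssh_runner tries to run `df -h /var | awk 'NR==2{print $5}'` over SSH, and every TCP dial to port 22 gives up after roughly 75 seconds, which is what turns each host into "Error" / kubelet "Nonexistent". A minimal sketch of that kind of probe, assuming the node IP, user, key path, and timeout taken from the sshutil log lines above (this is not minikube's actual sshutil code):

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address are placeholders copied from the sshutil log lines.
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		Timeout:         75 * time.Second,            // the dials above time out after ~75s
	}
	client, err := ssh.Dial("tcp", "192.168.105.5:22", cfg)
	if err != nil {
		log.Fatal(err) // the run above dies here: "connect: operation timed out"
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output(`df -h /var | awk 'NR==2{print $5}'`) // the same probe ssh_runner runs
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("/var usage: %s", out)
}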

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.14s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0815 16:26:21.241986    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.099889958s)
ha_test.go:413: expected profile "ha-719000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-719000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-719000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-719000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-719000 -n ha-719000
E0815 16:28:56.898952    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-719000 -n ha-719000: exit status 3 (1m15.037800583s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0815 16:29:51.346949    2625 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0815 16:29:51.346961    2625 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-719000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.14s)
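
The assertion at ha_test.go:413 shells out to `profile list --output json` and checks the Status field of the "ha-719000" entry in the "valid" array; with every node unreachable, the profile reports "Stopped" instead of the expected "Degraded". A trimmed sketch of that check, keeping only the JSON fields the comparison needs (the full blob above carries the whole cluster config, which encoding/json simply skips here):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList mirrors just the fields of `minikube profile list --output json`
// that the status check uses; all other keys in the output are ignored.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-719000" {
			// This run printed "Stopped" here, which is what failed the test.
			fmt.Printf("profile %s status: %s (test expects Degraded)\n", p.Name, p.Status)
		}
	}
}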

TestMultiControlPlane/serial/RestartSecondaryNode (305.17s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-719000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.081103625s)

-- stdout --
	* Starting "ha-719000-m02" control-plane node in "ha-719000" cluster
	* Restarting existing qemu2 VM for "ha-719000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-719000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 16:29:51.380296    2631 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:29:51.380572    2631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:29:51.380576    2631 out.go:358] Setting ErrFile to fd 2...
	I0815 16:29:51.380578    2631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:29:51.380700    2631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:29:51.380936    2631 mustload.go:65] Loading cluster: ha-719000
	I0815 16:29:51.381164    2631 config.go:182] Loaded profile config "ha-719000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0815 16:29:51.381388    2631 host.go:58] "ha-719000-m02" host status: Stopped
	I0815 16:29:51.385036    2631 out.go:177] * Starting "ha-719000-m02" control-plane node in "ha-719000" cluster
	I0815 16:29:51.388983    2631 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:29:51.388997    2631 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 16:29:51.389005    2631 cache.go:56] Caching tarball of preloaded images
	I0815 16:29:51.389073    2631 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:29:51.389078    2631 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:29:51.389134    2631 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/ha-719000/config.json ...
	I0815 16:29:51.389419    2631 start.go:360] acquireMachinesLock for ha-719000-m02: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:29:51.389459    2631 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "ha-719000-m02"
	I0815 16:29:51.389468    2631 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:29:51.389472    2631 fix.go:54] fixHost starting: m02
	I0815 16:29:51.389602    2631 fix.go:112] recreateIfNeeded on ha-719000-m02: state=Stopped err=<nil>
	W0815 16:29:51.389607    2631 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:29:51.392925    2631 out.go:177] * Restarting existing qemu2 VM for "ha-719000-m02" ...
	I0815 16:29:51.396965    2631 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:29:51.396999    2631 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:e6:93:18:ce:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/disk.qcow2
	I0815 16:29:51.399496    2631 main.go:141] libmachine: STDOUT: 
	I0815 16:29:51.399511    2631 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:29:51.399534    2631 fix.go:56] duration metric: took 10.061833ms for fixHost
	I0815 16:29:51.399538    2631 start.go:83] releasing machines lock for "ha-719000-m02", held for 10.07525ms
	W0815 16:29:51.399545    2631 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:29:51.399575    2631 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:29:51.399579    2631 start.go:729] Will try again in 5 seconds ...
	I0815 16:29:56.400094    2631 start.go:360] acquireMachinesLock for ha-719000-m02: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:29:56.400200    2631 start.go:364] duration metric: took 86.708µs to acquireMachinesLock for "ha-719000-m02"
	I0815 16:29:56.400243    2631 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:29:56.400248    2631 fix.go:54] fixHost starting: m02
	I0815 16:29:56.400416    2631 fix.go:112] recreateIfNeeded on ha-719000-m02: state=Stopped err=<nil>
	W0815 16:29:56.400421    2631 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:29:56.404425    2631 out.go:177] * Restarting existing qemu2 VM for "ha-719000-m02" ...
	I0815 16:29:56.408436    2631 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:29:56.408477    2631 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:e6:93:18:ce:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/disk.qcow2
	I0815 16:29:56.410513    2631 main.go:141] libmachine: STDOUT: 
	I0815 16:29:56.410532    2631 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:29:56.410550    2631 fix.go:56] duration metric: took 10.302416ms for fixHost
	I0815 16:29:56.410554    2631 start.go:83] releasing machines lock for "ha-719000-m02", held for 10.347709ms
	W0815 16:29:56.410610    2631 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-719000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-719000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:29:56.414504    2631 out.go:201] 
	W0815 16:29:56.418441    2631 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:29:56.418448    2631 out.go:270] * 
	* 
	W0815 16:29:56.420205    2631 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:29:56.424499    2631 out.go:201] 

** /stderr **
ha_test.go:422: I0815 16:29:51.380296    2631 out.go:345] Setting OutFile to fd 1 ...
I0815 16:29:51.380572    2631 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:29:51.380576    2631 out.go:358] Setting ErrFile to fd 2...
I0815 16:29:51.380578    2631 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:29:51.380700    2631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
I0815 16:29:51.380936    2631 mustload.go:65] Loading cluster: ha-719000
I0815 16:29:51.381164    2631 config.go:182] Loaded profile config "ha-719000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
W0815 16:29:51.381388    2631 host.go:58] "ha-719000-m02" host status: Stopped
I0815 16:29:51.385036    2631 out.go:177] * Starting "ha-719000-m02" control-plane node in "ha-719000" cluster
I0815 16:29:51.388983    2631 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0815 16:29:51.388997    2631 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0815 16:29:51.389005    2631 cache.go:56] Caching tarball of preloaded images
I0815 16:29:51.389073    2631 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0815 16:29:51.389078    2631 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0815 16:29:51.389134    2631 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/ha-719000/config.json ...
I0815 16:29:51.389419    2631 start.go:360] acquireMachinesLock for ha-719000-m02: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0815 16:29:51.389459    2631 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "ha-719000-m02"
I0815 16:29:51.389468    2631 start.go:96] Skipping create...Using existing machine configuration
I0815 16:29:51.389472    2631 fix.go:54] fixHost starting: m02
I0815 16:29:51.389602    2631 fix.go:112] recreateIfNeeded on ha-719000-m02: state=Stopped err=<nil>
W0815 16:29:51.389607    2631 fix.go:138] unexpected machine state, will restart: <nil>
I0815 16:29:51.392925    2631 out.go:177] * Restarting existing qemu2 VM for "ha-719000-m02" ...
I0815 16:29:51.396965    2631 qemu.go:418] Using hvf for hardware acceleration
I0815 16:29:51.396999    2631 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:e6:93:18:ce:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/disk.qcow2
I0815 16:29:51.399496    2631 main.go:141] libmachine: STDOUT: 
I0815 16:29:51.399511    2631 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0815 16:29:51.399534    2631 fix.go:56] duration metric: took 10.061833ms for fixHost
I0815 16:29:51.399538    2631 start.go:83] releasing machines lock for "ha-719000-m02", held for 10.07525ms
W0815 16:29:51.399545    2631 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0815 16:29:51.399575    2631 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0815 16:29:51.399579    2631 start.go:729] Will try again in 5 seconds ...
I0815 16:29:56.400094    2631 start.go:360] acquireMachinesLock for ha-719000-m02: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0815 16:29:56.400200    2631 start.go:364] duration metric: took 86.708µs to acquireMachinesLock for "ha-719000-m02"
I0815 16:29:56.400243    2631 start.go:96] Skipping create...Using existing machine configuration
I0815 16:29:56.400248    2631 fix.go:54] fixHost starting: m02
I0815 16:29:56.400416    2631 fix.go:112] recreateIfNeeded on ha-719000-m02: state=Stopped err=<nil>
W0815 16:29:56.400421    2631 fix.go:138] unexpected machine state, will restart: <nil>
I0815 16:29:56.404425    2631 out.go:177] * Restarting existing qemu2 VM for "ha-719000-m02" ...
I0815 16:29:56.408436    2631 qemu.go:418] Using hvf for hardware acceleration
I0815 16:29:56.408477    2631 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:e6:93:18:ce:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/disk.qcow2
I0815 16:29:56.410513    2631 main.go:141] libmachine: STDOUT: 
I0815 16:29:56.410532    2631 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0815 16:29:56.410550    2631 fix.go:56] duration metric: took 10.302416ms for fixHost
I0815 16:29:56.410554    2631 start.go:83] releasing machines lock for "ha-719000-m02", held for 10.347709ms
W0815 16:29:56.410610    2631 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-719000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-719000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0815 16:29:56.414504    2631 out.go:201] 
W0815 16:29:56.418441    2631 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0815 16:29:56.418448    2631 out.go:270] * 
* 
W0815 16:29:56.420205    2631 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0815 16:29:56.424499    2631 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-719000 node start m02 -v=7 --alsologtostderr": exit status 80
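
The "libmachine: executing:" lines show how the restart is wired: socket_vmnet_client first connects to /var/run/socket_vmnet and then execs qemu with the connected socket inherited as file descriptor 3 (hence `-netdev socket,id=net0,fd=3`), so when no daemon is listening, the whole start fails with "Connection refused" before qemu ever runs. A rough Go sketch of that handoff (illustrative only; the real client is a separate C program and the qemu arguments are abbreviated):

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	// Step 1: connect to the vmnet daemon. In this report the dial is what
	// fails: Failed to connect to "/var/run/socket_vmnet": Connection refused.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatal(err)
	}
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}
	// Step 2: exec qemu with the socket inherited as fd 3. os/exec numbers
	// ExtraFiles from 3, so ExtraFiles[0] lines up with -netdev socket,fd=3.
	cmd := exec.Command("qemu-system-aarch64",
		"-device", "virtio-net-pci,netdev=net0",
		"-netdev", "socket,id=net0,fd=3") // remaining arguments omitted
	cmd.ExtraFiles = []*os.File{f}
	log.Fatal(cmd.Run())
}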
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr
E0815 16:30:19.994079    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:30:53.519124    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr: exit status 7 (3m45.048869875s)

-- stdout --
	ha-719000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-719000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-719000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-719000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0815 16:29:56.460874    2635 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:29:56.461409    2635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:29:56.461422    2635 out.go:358] Setting ErrFile to fd 2...
	I0815 16:29:56.461429    2635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:29:56.462147    2635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:29:56.462288    2635 out.go:352] Setting JSON to false
	I0815 16:29:56.462309    2635 mustload.go:65] Loading cluster: ha-719000
	I0815 16:29:56.462327    2635 notify.go:220] Checking for updates...
	I0815 16:29:56.462521    2635 config.go:182] Loaded profile config "ha-719000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:29:56.462527    2635 status.go:255] checking status of ha-719000 ...
	I0815 16:29:56.463308    2635 status.go:330] ha-719000 host status = "Running" (err=<nil>)
	I0815 16:29:56.463316    2635 host.go:66] Checking if "ha-719000" exists ...
	I0815 16:29:56.463418    2635 host.go:66] Checking if "ha-719000" exists ...
	I0815 16:29:56.463529    2635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 16:29:56.463537    2635 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000/id_rsa Username:docker}
	W0815 16:31:11.462007    2635 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0815 16:31:11.462271    2635 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0815 16:31:11.462312    2635 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0815 16:31:11.462329    2635 status.go:257] ha-719000 status: &{Name:ha-719000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 16:31:11.462366    2635 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0815 16:31:11.462384    2635 status.go:255] checking status of ha-719000-m02 ...
	I0815 16:31:11.463114    2635 status.go:330] ha-719000-m02 host status = "Stopped" (err=<nil>)
	I0815 16:31:11.463135    2635 status.go:343] host is not running, skipping remaining checks
	I0815 16:31:11.463146    2635 status.go:257] ha-719000-m02 status: &{Name:ha-719000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:31:11.463167    2635 status.go:255] checking status of ha-719000-m03 ...
	I0815 16:31:11.465559    2635 status.go:330] ha-719000-m03 host status = "Running" (err=<nil>)
	I0815 16:31:11.465588    2635 host.go:66] Checking if "ha-719000-m03" exists ...
	I0815 16:31:11.466150    2635 host.go:66] Checking if "ha-719000-m03" exists ...
	I0815 16:31:11.466704    2635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 16:31:11.466733    2635 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m03/id_rsa Username:docker}
	W0815 16:32:26.467565    2635 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0815 16:32:26.467773    2635 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0815 16:32:26.467816    2635 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0815 16:32:26.467834    2635 status.go:257] ha-719000-m03 status: &{Name:ha-719000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 16:32:26.467875    2635 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0815 16:32:26.467904    2635 status.go:255] checking status of ha-719000-m04 ...
	I0815 16:32:26.470919    2635 status.go:330] ha-719000-m04 host status = "Running" (err=<nil>)
	I0815 16:32:26.470947    2635 host.go:66] Checking if "ha-719000-m04" exists ...
	I0815 16:32:26.471469    2635 host.go:66] Checking if "ha-719000-m04" exists ...
	I0815 16:32:26.472032    2635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 16:32:26.472064    2635 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m04/id_rsa Username:docker}
	W0815 16:33:41.472243    2635 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0815 16:33:41.472295    2635 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0815 16:33:41.472303    2635 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0815 16:33:41.472307    2635 status.go:257] ha-719000-m04 status: &{Name:ha-719000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0815 16:33:41.472316    2635 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-719000 -n ha-719000
E0815 16:33:56.892698    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-719000 -n ha-719000: exit status 3 (1m15.04103475s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0815 16:34:56.509358    2971 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0815 16:34:56.509390    2971 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-719000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (305.17s)
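
Every failed start in this run traces back to the same precondition: nothing is accepting connections on /var/run/socket_vmnet, so neither retrying the restart nor the suggested `minikube delete -p ha-719000` can help until the daemon is back. A quick preflight that separates a dead socket_vmnet from a genuine VM problem (socket path taken from the log; the brew service name assumes the usual Homebrew install and is not verified here):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err) // matches the "Connection refused" above
		fmt.Println(`try restarting the daemon, e.g. "sudo brew services restart socket_vmnet"`)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}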

TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.56s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-719000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-719000 -v=7 --alsologtostderr
E0815 16:38:56.886327    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:40:53.636965    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-719000 -v=7 --alsologtostderr: (5m27.175663s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-719000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-719000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.21231475s)

-- stdout --
	* [ha-719000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-719000" primary control-plane node in "ha-719000" cluster
	* Restarting existing qemu2 VM for "ha-719000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-719000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 16:42:54.025527    3122 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:42:54.025743    3122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:42:54.025747    3122 out.go:358] Setting ErrFile to fd 2...
	I0815 16:42:54.025750    3122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:42:54.025934    3122 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:42:54.027274    3122 out.go:352] Setting JSON to false
	I0815 16:42:54.046688    3122 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2542,"bootTime":1723762832,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:42:54.046766    3122 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:42:54.051930    3122 out.go:177] * [ha-719000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:42:54.058924    3122 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:42:54.058977    3122 notify.go:220] Checking for updates...
	I0815 16:42:54.066841    3122 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:42:54.070836    3122 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:42:54.073765    3122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:42:54.076847    3122 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:42:54.079867    3122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:42:54.081535    3122 config.go:182] Loaded profile config "ha-719000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:42:54.081586    3122 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:42:54.085883    3122 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 16:42:54.092784    3122 start.go:297] selected driver: qemu2
	I0815 16:42:54.092793    3122 start.go:901] validating driver "qemu2" against &{Name:ha-719000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-719000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:42:54.092894    3122 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:42:54.095440    3122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:42:54.095468    3122 cni.go:84] Creating CNI manager for ""
	I0815 16:42:54.095474    3122 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:42:54.095533    3122 start.go:340] cluster config:
	{Name:ha-719000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-719000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:42:54.099533    3122 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:42:54.107892    3122 out.go:177] * Starting "ha-719000" primary control-plane node in "ha-719000" cluster
	I0815 16:42:54.111812    3122 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:42:54.111828    3122 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 16:42:54.111840    3122 cache.go:56] Caching tarball of preloaded images
	I0815 16:42:54.111927    3122 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:42:54.111933    3122 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:42:54.112008    3122 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/ha-719000/config.json ...
	I0815 16:42:54.112459    3122 start.go:360] acquireMachinesLock for ha-719000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:42:54.112494    3122 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "ha-719000"
	I0815 16:42:54.112505    3122 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:42:54.112512    3122 fix.go:54] fixHost starting: 
	I0815 16:42:54.112631    3122 fix.go:112] recreateIfNeeded on ha-719000: state=Stopped err=<nil>
	W0815 16:42:54.112639    3122 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:42:54.116859    3122 out.go:177] * Restarting existing qemu2 VM for "ha-719000" ...
	I0815 16:42:54.128221    3122 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:42:54.128295    3122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:f5:50:a5:e1:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000/disk.qcow2
	I0815 16:42:54.130374    3122 main.go:141] libmachine: STDOUT: 
	I0815 16:42:54.130394    3122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:42:54.130424    3122 fix.go:56] duration metric: took 17.912959ms for fixHost
	I0815 16:42:54.130429    3122 start.go:83] releasing machines lock for "ha-719000", held for 17.930208ms
	W0815 16:42:54.130435    3122 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:42:54.130468    3122 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:42:54.130473    3122 start.go:729] Will try again in 5 seconds ...
	I0815 16:42:59.132685    3122 start.go:360] acquireMachinesLock for ha-719000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:42:59.133174    3122 start.go:364] duration metric: took 389.666µs to acquireMachinesLock for "ha-719000"
	I0815 16:42:59.133374    3122 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:42:59.133395    3122 fix.go:54] fixHost starting: 
	I0815 16:42:59.134123    3122 fix.go:112] recreateIfNeeded on ha-719000: state=Stopped err=<nil>
	W0815 16:42:59.134155    3122 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:42:59.138576    3122 out.go:177] * Restarting existing qemu2 VM for "ha-719000" ...
	I0815 16:42:59.141488    3122 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:42:59.141703    3122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:f5:50:a5:e1:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000/disk.qcow2
	I0815 16:42:59.150334    3122 main.go:141] libmachine: STDOUT: 
	I0815 16:42:59.150392    3122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:42:59.150460    3122 fix.go:56] duration metric: took 17.06325ms for fixHost
	I0815 16:42:59.150479    3122 start.go:83] releasing machines lock for "ha-719000", held for 17.239459ms
	W0815 16:42:59.150663    3122 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-719000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-719000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:42:59.156548    3122 out.go:201] 
	W0815 16:42:59.160472    3122 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:42:59.160493    3122 out.go:270] * 
	* 
	W0815 16:42:59.163162    3122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:42:59.174535    3122 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-719000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-719000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-719000 -n ha-719000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-719000 -n ha-719000: exit status 7 (34.664333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-719000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.56s)
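Note: every qemu2 VM start in this run dies the same way: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet, so QEMU never receives the network file descriptor it expects (-netdev socket,id=net0,fd=3) and the guest is never booted. A minimal Go sketch that reproduces just the failing step, assuming only the socket path shown in the log above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the control socket the same way socket_vmnet_client would;
		// on this host nothing is listening, so the dial is refused.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // e.g. "connect: connection refused"
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial is refused, restarting the socket_vmnet daemon on the host (typically run as a root launchd service alongside the client binary) should clear the GUEST_PROVISION failures in the tests that follow.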

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-719000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.590291ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-719000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-719000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 16:42:59.317296    3135 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:42:59.317573    3135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:42:59.317580    3135 out.go:358] Setting ErrFile to fd 2...
	I0815 16:42:59.317582    3135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:42:59.317708    3135 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:42:59.317928    3135 mustload.go:65] Loading cluster: ha-719000
	I0815 16:42:59.318190    3135 config.go:182] Loaded profile config "ha-719000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0815 16:42:59.318533    3135 out.go:270] ! The control-plane node ha-719000 host is not running (will try others): state=Stopped
	! The control-plane node ha-719000 host is not running (will try others): state=Stopped
	W0815 16:42:59.318639    3135 out.go:270] ! The control-plane node ha-719000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-719000-m02 host is not running (will try others): state=Stopped
	I0815 16:42:59.322816    3135 out.go:177] * The control-plane node ha-719000-m03 host is not running: state=Stopped
	I0815 16:42:59.324052    3135 out.go:177]   To start a cluster, run: "minikube start -p ha-719000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-719000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr: exit status 7 (30.974875ms)

                                                
                                                
-- stdout --
	ha-719000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-719000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-719000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-719000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 16:42:59.355968    3137 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:42:59.356343    3137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:42:59.356348    3137 out.go:358] Setting ErrFile to fd 2...
	I0815 16:42:59.356351    3137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:42:59.356541    3137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:42:59.356702    3137 out.go:352] Setting JSON to false
	I0815 16:42:59.356714    3137 mustload.go:65] Loading cluster: ha-719000
	I0815 16:42:59.356756    3137 notify.go:220] Checking for updates...
	I0815 16:42:59.357178    3137 config.go:182] Loaded profile config "ha-719000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:42:59.357185    3137 status.go:255] checking status of ha-719000 ...
	I0815 16:42:59.357376    3137 status.go:330] ha-719000 host status = "Stopped" (err=<nil>)
	I0815 16:42:59.357380    3137 status.go:343] host is not running, skipping remaining checks
	I0815 16:42:59.357382    3137 status.go:257] ha-719000 status: &{Name:ha-719000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:42:59.357392    3137 status.go:255] checking status of ha-719000-m02 ...
	I0815 16:42:59.357486    3137 status.go:330] ha-719000-m02 host status = "Stopped" (err=<nil>)
	I0815 16:42:59.357490    3137 status.go:343] host is not running, skipping remaining checks
	I0815 16:42:59.357492    3137 status.go:257] ha-719000-m02 status: &{Name:ha-719000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:42:59.357496    3137 status.go:255] checking status of ha-719000-m03 ...
	I0815 16:42:59.357584    3137 status.go:330] ha-719000-m03 host status = "Stopped" (err=<nil>)
	I0815 16:42:59.357587    3137 status.go:343] host is not running, skipping remaining checks
	I0815 16:42:59.357589    3137 status.go:257] ha-719000-m03 status: &{Name:ha-719000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:42:59.357593    3137 status.go:255] checking status of ha-719000-m04 ...
	I0815 16:42:59.357689    3137 status.go:330] ha-719000-m04 host status = "Stopped" (err=<nil>)
	I0815 16:42:59.357692    3137 status.go:343] host is not running, skipping remaining checks
	I0815 16:42:59.357694    3137 status.go:257] ha-719000-m04 status: &{Name:ha-719000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-719000 -n ha-719000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-719000 -n ha-719000: exit status 7 (30.298625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-719000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-719000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-719000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-719000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-719000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-719000 -n ha-719000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-719000 -n ha-719000: exit status 7 (30.16625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-719000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
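Note: the assertion parses `out/minikube-darwin-arm64 profile list --output json` and expects the profile's Status to be "Degraded" (an HA cluster that lost some control planes); because no node ever started, the computed status is "Stopped". A small sketch of that extraction, decoding only the two fields the comparison needs (the struct here is illustrative, not minikube's own type):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Only the fields the assertion compares; the real config blob is far larger.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		raw := `{"invalid":[],"valid":[{"Name":"ha-719000","Status":"Stopped"}]}`
		var pl profileList
		if err := json.Unmarshal([]byte(raw), &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: got %q, want %q\n", p.Name, p.Status, "Degraded")
		}
	}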

                                                
                                    
TestMultiControlPlane/serial/StopCluster (227.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 stop -v=7 --alsologtostderr
E0815 16:43:57.016910    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:45:53.640488    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-719000 stop -v=7 --alsologtostderr: signal: killed (3m47.266105042s)

                                                
                                                
-- stdout --
	* Stopping node "ha-719000-m04"  ...
	* Stopping node "ha-719000-m03"  ...
	* Stopping node "ha-719000-m02"  ...
	* Stopping node "ha-719000"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 16:42:59.495386    3146 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:42:59.495522    3146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:42:59.495525    3146 out.go:358] Setting ErrFile to fd 2...
	I0815 16:42:59.495527    3146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:42:59.495648    3146 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:42:59.495882    3146 out.go:352] Setting JSON to false
	I0815 16:42:59.495980    3146 mustload.go:65] Loading cluster: ha-719000
	I0815 16:42:59.496184    3146 config.go:182] Loaded profile config "ha-719000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:42:59.496237    3146 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/ha-719000/config.json ...
	I0815 16:42:59.496478    3146 mustload.go:65] Loading cluster: ha-719000
	I0815 16:42:59.496570    3146 config.go:182] Loaded profile config "ha-719000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:42:59.496587    3146 stop.go:39] StopHost: ha-719000-m04
	I0815 16:42:59.500798    3146 out.go:177] * Stopping node "ha-719000-m04"  ...
	I0815 16:42:59.506777    3146 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 16:42:59.506814    3146 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 16:42:59.506825    3146 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m04/id_rsa Username:docker}
	W0815 16:44:14.509652    3146 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0815 16:44:14.510007    3146 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0815 16:44:14.510171    3146 main.go:141] libmachine: Stopping "ha-719000-m04"...
	I0815 16:44:14.510344    3146 stop.go:66] stop err: Machine "ha-719000-m04" is already stopped.
	I0815 16:44:14.510374    3146 stop.go:69] host is already stopped
	I0815 16:44:14.510401    3146 stop.go:39] StopHost: ha-719000-m03
	I0815 16:44:14.515359    3146 out.go:177] * Stopping node "ha-719000-m03"  ...
	I0815 16:44:14.522311    3146 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 16:44:14.522448    3146 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 16:44:14.522478    3146 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m03/id_rsa Username:docker}
	W0815 16:45:29.526484    3146 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0815 16:45:29.526700    3146 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0815 16:45:29.526766    3146 main.go:141] libmachine: Stopping "ha-719000-m03"...
	I0815 16:45:29.526915    3146 stop.go:66] stop err: Machine "ha-719000-m03" is already stopped.
	I0815 16:45:29.526943    3146 stop.go:69] host is already stopped
	I0815 16:45:29.526974    3146 stop.go:39] StopHost: ha-719000-m02
	I0815 16:45:29.536767    3146 out.go:177] * Stopping node "ha-719000-m02"  ...
	I0815 16:45:29.540711    3146 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 16:45:29.540853    3146 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 16:45:29.540885    3146 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000-m02/id_rsa Username:docker}
	W0815 16:46:44.542393    3146 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.6:22: connect: operation timed out
	W0815 16:46:44.542596    3146 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.6:22: connect: operation timed out
	I0815 16:46:44.542662    3146 main.go:141] libmachine: Stopping "ha-719000-m02"...
	I0815 16:46:44.542815    3146 stop.go:66] stop err: Machine "ha-719000-m02" is already stopped.
	I0815 16:46:44.542842    3146 stop.go:69] host is already stopped
	I0815 16:46:44.542868    3146 stop.go:39] StopHost: ha-719000
	I0815 16:46:44.548665    3146 out.go:177] * Stopping node "ha-719000"  ...
	I0815 16:46:44.552666    3146 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 16:46:44.552809    3146 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 16:46:44.552840    3146 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/ha-719000/id_rsa Username:docker}

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-darwin-arm64 -p ha-719000 stop -v=7 --alsologtostderr": signal: killed
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr: context deadline exceeded (2.333µs)
ha_test.go:540: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-719000 -n ha-719000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-719000 -n ha-719000: exit status 7 (72.457667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-719000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (227.34s)
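Note: the stop is not stuck in QEMU. Before stopping each node the command backs up /etc/cni and /etc/kubernetes over SSH, and each dial to an unreachable node only fails after roughly 75 s (16:42:59 to 16:44:14 for m04, with the same spacing for m03 and m02), so four nodes at that rate need about five minutes while the harness kills the command at 3m47s. The budget arithmetic, with the 75 s figure read off the timestamps above:

	package main

	import "fmt"

	func main() {
		const dialTimeout = 75          // seconds per unreachable node, from the log timestamps
		const nodes = 4                 // ha-719000 plus m02, m03, m04
		const harnessBudget = 3*60 + 47 // the stop was killed after 3m47.27s
		need := dialTimeout * nodes
		fmt.Printf("need %ds, allowed %ds, killed early: %v\n",
			need, harnessBudget, need > harnessBudget)
	}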

                                                
                                    
TestImageBuild/serial/Setup (9.91s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-926000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-926000 --driver=qemu2 : exit status 80 (9.843910958s)

                                                
                                                
-- stdout --
	* [image-926000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-926000" primary control-plane node in "image-926000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-926000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-926000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-926000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-926000 -n image-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-926000 -n image-926000: exit status 7 (67.424834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.91s)

                                                
                                    
TestJSONOutput/start/Command (9.9s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-801000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0815 16:47:00.119611    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-801000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.897739917s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d25a67bc-8953-417d-893f-db33989dbe31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-801000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"20215427-6c54-4168-949f-20b85d3fb8ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19452"}}
	{"specversion":"1.0","id":"b02ad025-b140-435a-96ea-16e91ee24f9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig"}}
	{"specversion":"1.0","id":"b991485c-b8c2-416a-afa7-f599f3e25fa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b34ef3f9-4fab-4ad5-b167-eb27db8c3273","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ddbbd813-2d5a-48b5-b456-46eb9eb63666","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube"}}
	{"specversion":"1.0","id":"0dfd3083-ff29-4dd5-8545-adcc6604d818","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e3a78e42-1bd4-4ba5-9d23-6745493ca5ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"99f463cc-32a3-4d4e-90c9-d00de7a2c887","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"b53ebafd-1c34-4b05-9b04-af42fe6bba79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-801000\" primary control-plane node in \"json-output-801000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e0b3bfa-b847-4c54-b73d-b166cf4c2ed0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"f97200fd-be07-46e8-9321-afdc91335ff1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-801000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"4601ef32-2374-449c-b81f-db0f5f878c8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"cf11db9d-e141-4eab-8e39-d8ee3452f83f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"27f867d0-0132-42fe-ba6b-6395520e7164","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-801000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"5ce84ee1-dc44-4eed-989c-1d0eb2b73653","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"1efe99e8-29b3-49ec-b755-0cbde3c803b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-801000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.90s)
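Note: the JSON-output test decodes stdout line by line as CloudEvents, but the qemu driver's raw "OUTPUT:" and "ERROR:" lines are interleaved with the event stream, so decoding aborts at the first non-JSON byte. A minimal reproduction of the decode failure (a sketch, not the test's actual parser):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
			"OUTPUT: ", // raw driver output mixed into the event stream
		}
		for _, line := range lines {
			var ev map[string]any
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				fmt.Println(err) // invalid character 'O' looking for beginning of value
				continue
			}
			fmt.Println("event type:", ev["type"])
		}
	}

The unpause subtest below trips over the same mismatch, failing on its leading "*" line instead of a JSON event.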

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-801000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-801000 --output=json --user=testUser: exit status 83 (76.900875ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6a1c0c15-4d53-477f-ba79-024533724a9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-801000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"2c699d14-2bf6-4497-85ee-6007765057a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-801000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-801000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-801000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-801000 --output=json --user=testUser: exit status 83 (45.286708ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-801000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-801000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-801000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-801000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.23s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-432000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-432000 --driver=qemu2 : exit status 80 (9.916339458s)

                                                
                                                
-- stdout --
	* [first-432000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-432000" primary control-plane node in "first-432000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-432000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-432000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-432000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-15 16:47:21.014549 -0700 PDT m=+2524.947848709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-434000 -n second-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-434000 -n second-434000: exit status 85 (80.250208ms)

                                                
                                                
-- stdout --
	* Profile "second-434000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-434000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-434000" host is not running, skipping log retrieval (state="* Profile \"second-434000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-434000\"")
helpers_test.go:175: Cleaning up "second-434000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-434000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-15 16:47:21.20896 -0700 PDT m=+2525.142257209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-432000 -n first-432000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-432000 -n first-432000: exit status 7 (30.090209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-432000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-432000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-432000
--- FAIL: TestMinikubeProfile (10.23s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.12s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-140000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-140000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.052629791s)

                                                
                                                
-- stdout --
	* [mount-start-1-140000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-140000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-140000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-140000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-140000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-140000 -n mount-start-1-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-140000 -n mount-start-1-140000: exit status 7 (69.929ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-140000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.12s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-700000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-700000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.869405875s)

                                                
                                                
-- stdout --
	* [multinode-700000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-700000" primary control-plane node in "multinode-700000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-700000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 16:47:31.667476    3342 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:47:31.667633    3342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:47:31.667637    3342 out.go:358] Setting ErrFile to fd 2...
	I0815 16:47:31.667639    3342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:47:31.667774    3342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:47:31.668855    3342 out.go:352] Setting JSON to false
	I0815 16:47:31.685649    3342 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2819,"bootTime":1723762832,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:47:31.685721    3342 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:47:31.692794    3342 out.go:177] * [multinode-700000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:47:31.702686    3342 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:47:31.702720    3342 notify.go:220] Checking for updates...
	I0815 16:47:31.708616    3342 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:47:31.711667    3342 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:47:31.714691    3342 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:47:31.717683    3342 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:47:31.720712    3342 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:47:31.723967    3342 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:47:31.728634    3342 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 16:47:31.735732    3342 start.go:297] selected driver: qemu2
	I0815 16:47:31.735740    3342 start.go:901] validating driver "qemu2" against <nil>
	I0815 16:47:31.735748    3342 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:47:31.738172    3342 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 16:47:31.741640    3342 out.go:177] * Automatically selected the socket_vmnet network
	I0815 16:47:31.744794    3342 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:47:31.744827    3342 cni.go:84] Creating CNI manager for ""
	I0815 16:47:31.744832    3342 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0815 16:47:31.744836    3342 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 16:47:31.744873    3342 start.go:340] cluster config:
	{Name:multinode-700000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:47:31.748888    3342 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:47:31.756633    3342 out.go:177] * Starting "multinode-700000" primary control-plane node in "multinode-700000" cluster
	I0815 16:47:31.760655    3342 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:47:31.760669    3342 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 16:47:31.760677    3342 cache.go:56] Caching tarball of preloaded images
	I0815 16:47:31.760733    3342 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:47:31.760739    3342 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:47:31.760970    3342 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/multinode-700000/config.json ...
	I0815 16:47:31.760982    3342 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/multinode-700000/config.json: {Name:mk13964e937bbf88ce635b178bc3cb8dfdcd44a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:47:31.761205    3342 start.go:360] acquireMachinesLock for multinode-700000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:47:31.761241    3342 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "multinode-700000"
	I0815 16:47:31.761255    3342 start.go:93] Provisioning new machine with config: &{Name:multinode-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:47:31.761281    3342 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 16:47:31.769675    3342 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 16:47:31.788700    3342 start.go:159] libmachine.API.Create for "multinode-700000" (driver="qemu2")
	I0815 16:47:31.788729    3342 client.go:168] LocalClient.Create starting
	I0815 16:47:31.788792    3342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 16:47:31.788822    3342 main.go:141] libmachine: Decoding PEM data...
	I0815 16:47:31.788832    3342 main.go:141] libmachine: Parsing certificate...
	I0815 16:47:31.788875    3342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 16:47:31.788906    3342 main.go:141] libmachine: Decoding PEM data...
	I0815 16:47:31.788913    3342 main.go:141] libmachine: Parsing certificate...
	I0815 16:47:31.789275    3342 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 16:47:31.939760    3342 main.go:141] libmachine: Creating SSH key...
	I0815 16:47:32.065983    3342 main.go:141] libmachine: Creating Disk image...
	I0815 16:47:32.065990    3342 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 16:47:32.066191    3342 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2
	I0815 16:47:32.075561    3342 main.go:141] libmachine: STDOUT: 
	I0815 16:47:32.075584    3342 main.go:141] libmachine: STDERR: 
	I0815 16:47:32.075636    3342 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2 +20000M
	I0815 16:47:32.083694    3342 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 16:47:32.083712    3342 main.go:141] libmachine: STDERR: 
	I0815 16:47:32.083724    3342 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2
	I0815 16:47:32.083731    3342 main.go:141] libmachine: Starting QEMU VM...
	I0815 16:47:32.083742    3342 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:47:32.083776    3342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:6f:1f:4e:0b:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2
	I0815 16:47:32.085457    3342 main.go:141] libmachine: STDOUT: 
	I0815 16:47:32.085473    3342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:47:32.085497    3342 client.go:171] duration metric: took 296.758ms to LocalClient.Create
	I0815 16:47:34.087706    3342 start.go:128] duration metric: took 2.326376917s to createHost
	I0815 16:47:34.087767    3342 start.go:83] releasing machines lock for "multinode-700000", held for 2.326490458s
	W0815 16:47:34.087827    3342 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:47:34.105146    3342 out.go:177] * Deleting "multinode-700000" in qemu2 ...
	W0815 16:47:34.131334    3342 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:47:34.131371    3342 start.go:729] Will try again in 5 seconds ...
	I0815 16:47:39.131719    3342 start.go:360] acquireMachinesLock for multinode-700000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:47:39.132196    3342 start.go:364] duration metric: took 356.625µs to acquireMachinesLock for "multinode-700000"
	I0815 16:47:39.132345    3342 start.go:93] Provisioning new machine with config: &{Name:multinode-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:47:39.132667    3342 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 16:47:39.145345    3342 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 16:47:39.194663    3342 start.go:159] libmachine.API.Create for "multinode-700000" (driver="qemu2")
	I0815 16:47:39.194716    3342 client.go:168] LocalClient.Create starting
	I0815 16:47:39.194847    3342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 16:47:39.194904    3342 main.go:141] libmachine: Decoding PEM data...
	I0815 16:47:39.194923    3342 main.go:141] libmachine: Parsing certificate...
	I0815 16:47:39.194998    3342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 16:47:39.195043    3342 main.go:141] libmachine: Decoding PEM data...
	I0815 16:47:39.195057    3342 main.go:141] libmachine: Parsing certificate...
	I0815 16:47:39.195562    3342 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 16:47:39.355583    3342 main.go:141] libmachine: Creating SSH key...
	I0815 16:47:39.444985    3342 main.go:141] libmachine: Creating Disk image...
	I0815 16:47:39.444993    3342 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 16:47:39.445180    3342 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2
	I0815 16:47:39.454452    3342 main.go:141] libmachine: STDOUT: 
	I0815 16:47:39.454468    3342 main.go:141] libmachine: STDERR: 
	I0815 16:47:39.454525    3342 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2 +20000M
	I0815 16:47:39.462421    3342 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 16:47:39.462433    3342 main.go:141] libmachine: STDERR: 
	I0815 16:47:39.462443    3342 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2
	I0815 16:47:39.462446    3342 main.go:141] libmachine: Starting QEMU VM...
	I0815 16:47:39.462457    3342 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:47:39.462488    3342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:a8:f5:55:42:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2
	I0815 16:47:39.464089    3342 main.go:141] libmachine: STDOUT: 
	I0815 16:47:39.464103    3342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:47:39.464116    3342 client.go:171] duration metric: took 269.3915ms to LocalClient.Create
	I0815 16:47:41.466309    3342 start.go:128] duration metric: took 2.3335885s to createHost
	I0815 16:47:41.466363    3342 start.go:83] releasing machines lock for "multinode-700000", held for 2.334096166s
	W0815 16:47:41.466682    3342 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-700000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-700000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:47:41.477228    3342 out.go:201] 
	W0815 16:47:41.484350    3342 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:47:41.484379    3342 out.go:270] * 
	* 
	W0815 16:47:41.486883    3342 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:47:41.495269    3342 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-700000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000: exit status 7 (69.688667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.94s)
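The qemu-system-aarch64 invocation logged above also shows how networking is wired up: socket_vmnet_client connects to /var/run/socket_vmnet and launches QEMU with the connected socket inherited as file descriptor 3, which is what -netdev socket,id=net0,fd=3 refers to. A rough Go equivalent of that fd handoff (an illustrative sketch under those assumptions, not socket_vmnet_client's actual source):

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatalf("dial: %v", err) // the failure mode throughout this report
	}
	f, err := conn.(*net.UnixConn).File() // dup the socket so the child can inherit it
	if err != nil {
		log.Fatal(err)
	}
	// ExtraFiles[0] becomes fd 3 in the child (after stdin/stdout/stderr),
	// matching the -netdev socket,id=net0,fd=3 flag in the log above.
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{f}
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}

Because the dial fails before QEMU is launched, the VM never boots and the start exits with GUEST_PROVISION.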

TestMultiNode/serial/DeployApp2Nodes (71.67s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (126.995833ms)

** stderr ** 
	error: cluster "multinode-700000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- rollout status deployment/busybox: exit status 1 (58.393917ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.749792ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.367958ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.41975ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.606667ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.952917ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.514334ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.847041ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.298125ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.357334ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.210416ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.840667ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.507958ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.898458ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.512833ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000: exit status 7 (30.344042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (71.67s)
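The ten identical "failed to retrieve Pod IPs (may be temporary)" lines are a poll loop: the check at multinode_test.go:505 is retried on a timer until it succeeds or the retry budget runs out, after which multinode_test.go:524 reports the final failure. The shape of that loop, approximately (a sketch of the pattern, not the test's exact code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", "multinode-700000",
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			fmt.Printf("pod IPs: %s\n", out)
			return
		}
		fmt.Printf("failed to retrieve Pod IPs (may be temporary): %v\n", err)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("failed to resolve pod IPs: timed out")
}

Since the cluster was never created, every iteration hits the same "no server found" error and the retries only add wall-clock time, which is why a test that did no useful work took 71.67s.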

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-700000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.665542ms)

** stderr ** 
	error: no server found for cluster "multinode-700000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000: exit status 7 (30.115792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-700000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-700000 -v 3 --alsologtostderr: exit status 83 (44.112958ms)

-- stdout --
	* The control-plane node multinode-700000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-700000"

-- /stdout --
** stderr ** 
	I0815 16:48:53.367162    3450 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:48:53.367323    3450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:53.367327    3450 out.go:358] Setting ErrFile to fd 2...
	I0815 16:48:53.367329    3450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:53.367446    3450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:48:53.367684    3450 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:48:53.367861    3450 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:48:53.373392    3450 out.go:177] * The control-plane node multinode-700000 host is not running: state=Stopped
	I0815 16:48:53.378295    3450 out.go:177]   To start a cluster, run: "minikube start -p multinode-700000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-700000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000: exit status 7 (30.852292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-700000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-700000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (30.993625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-700000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-700000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-700000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000: exit status 7 (30.63975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
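The second error ("unexpected end of JSON input") follows directly from the first: kubectl exited non-zero and printed nothing to stdout, and unmarshalling an empty byte slice with encoding/json yields exactly that error. A stand-alone two-line illustration:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels) // kubectl produced no output
	fmt.Println(err)                           // "unexpected end of JSON input"
}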

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-700000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-700000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-700000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-700000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000: exit status 7 (30.371375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
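The assertion behind this failure counts the entries in the profile's Config.Nodes array against the expected three (two from the initial --nodes=2 start plus the attempted node add). Because provisioning failed, the stored profile still holds only the single templated control-plane node. A small checker in the same spirit (struct trimmed to the fields used; a sketch, not the test's code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models just enough of "minikube profile list --output json"
// to count nodes per profile.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct{ Name string }
		}
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}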

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status --output json --alsologtostderr: exit status 7 (30.5705ms)

-- stdout --
	{"Name":"multinode-700000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0815 16:48:53.582755    3462 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:48:53.582920    3462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:53.582923    3462 out.go:358] Setting ErrFile to fd 2...
	I0815 16:48:53.582925    3462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:53.583055    3462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:48:53.583181    3462 out.go:352] Setting JSON to true
	I0815 16:48:53.583192    3462 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:48:53.583232    3462 notify.go:220] Checking for updates...
	I0815 16:48:53.583420    3462 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:48:53.583425    3462 status.go:255] checking status of multinode-700000 ...
	I0815 16:48:53.583659    3462 status.go:330] multinode-700000 host status = "Stopped" (err=<nil>)
	I0815 16:48:53.583663    3462 status.go:343] host is not running, skipping remaining checks
	I0815 16:48:53.583665    3462 status.go:257] multinode-700000 status: &{Name:multinode-700000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-700000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000: exit status 7 (29.648959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
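The unmarshal error here is a shape mismatch rather than a connectivity problem: for a single-node profile, "minikube status --output json" prints one JSON object (as seen in the stdout above), while the test at multinode_test.go:191 decodes into a []cmd.Status slice, which only matches the array form emitted when a profile has multiple nodes. A tolerant decoder that accepts both forms (a sketch with a trimmed Status type, not minikube's own code):

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts either a JSON array (multi-node) or a bare
// object (single node) and normalizes the result to a slice.
func decodeStatuses(data []byte) ([]Status, error) {
	var list []Status
	if err := json.Unmarshal(data, &list); err == nil {
		return list, nil
	}
	var one Status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"multinode-700000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	st, err := decodeStatuses(raw)
	fmt.Println(st, err)
}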

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 node stop m03: exit status 85 (46.357334ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-700000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status: exit status 7 (30.312833ms)

-- stdout --
	multinode-700000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status --alsologtostderr: exit status 7 (30.351167ms)

-- stdout --
	multinode-700000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 16:48:53.720321    3470 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:48:53.720451    3470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:53.720454    3470 out.go:358] Setting ErrFile to fd 2...
	I0815 16:48:53.720457    3470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:53.720570    3470 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:48:53.720686    3470 out.go:352] Setting JSON to false
	I0815 16:48:53.720700    3470 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:48:53.720760    3470 notify.go:220] Checking for updates...
	I0815 16:48:53.720895    3470 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:48:53.720906    3470 status.go:255] checking status of multinode-700000 ...
	I0815 16:48:53.721125    3470 status.go:330] multinode-700000 host status = "Stopped" (err=<nil>)
	I0815 16:48:53.721129    3470 status.go:343] host is not running, skipping remaining checks
	I0815 16:48:53.721131    3470 status.go:257] multinode-700000 status: &{Name:multinode-700000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-700000 status --alsologtostderr": multinode-700000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000: exit status 7 (30.642208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (46.29s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.182125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0815 16:48:53.781985    3474 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:48:53.782230    3474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:53.782236    3474 out.go:358] Setting ErrFile to fd 2...
	I0815 16:48:53.782239    3474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:53.782375    3474 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:48:53.782602    3474 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:48:53.782777    3474 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:48:53.787341    3474 out.go:201] 
	W0815 16:48:53.790291    3474 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0815 16:48:53.790296    3474 out.go:270] * 
	* 
	W0815 16:48:53.791946    3474 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:48:53.795250    3474 out.go:201] 

** /stderr **
multinode_test.go:284: I0815 16:48:53.781985    3474 out.go:345] Setting OutFile to fd 1 ...
I0815 16:48:53.782230    3474 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:48:53.782236    3474 out.go:358] Setting ErrFile to fd 2...
I0815 16:48:53.782239    3474 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:48:53.782375    3474 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
I0815 16:48:53.782602    3474 mustload.go:65] Loading cluster: multinode-700000
I0815 16:48:53.782777    3474 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:48:53.787341    3474 out.go:201] 
W0815 16:48:53.790291    3474 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0815 16:48:53.790296    3474 out.go:270] * 
* 
W0815 16:48:53.791946    3474 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0815 16:48:53.795250    3474 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-700000 node start m03 -v=7 --alsologtostderr": exit status 85
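
Note: exit status 85 corresponds to the GUEST_NODE_RETRIEVE reason printed above: the profile simply has no node named m03, because the earlier two-node bring-up (TestMultiNode/serial/FreshStart2Nodes) never created one. Listing the nodes first makes that obvious (a sketch reusing commands that appear elsewhere in this log):

	out/minikube-darwin-arm64 node list -p multinode-700000
	out/minikube-darwin-arm64 -p multinode-700000 node start m03    # exit status 85 while m03 does not exist
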
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr: exit status 7 (30.940125ms)

-- stdout --
	multinode-700000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 16:48:53.829547    3476 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:48:53.829689    3476 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:53.829692    3476 out.go:358] Setting ErrFile to fd 2...
	I0815 16:48:53.829695    3476 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:53.829835    3476 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:48:53.829950    3476 out.go:352] Setting JSON to false
	I0815 16:48:53.829961    3476 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:48:53.830011    3476 notify.go:220] Checking for updates...
	I0815 16:48:53.830178    3476 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:48:53.830183    3476 status.go:255] checking status of multinode-700000 ...
	I0815 16:48:53.830404    3476 status.go:330] multinode-700000 host status = "Stopped" (err=<nil>)
	I0815 16:48:53.830407    3476 status.go:343] host is not running, skipping remaining checks
	I0815 16:48:53.830409    3476 status.go:257] multinode-700000 status: &{Name:multinode-700000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr: exit status 7 (73.283542ms)

-- stdout --
	multinode-700000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 16:48:55.315428    3478 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:48:55.315622    3478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:55.315627    3478 out.go:358] Setting ErrFile to fd 2...
	I0815 16:48:55.315630    3478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:55.315811    3478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:48:55.315987    3478 out.go:352] Setting JSON to false
	I0815 16:48:55.316006    3478 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:48:55.316037    3478 notify.go:220] Checking for updates...
	I0815 16:48:55.316276    3478 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:48:55.316282    3478 status.go:255] checking status of multinode-700000 ...
	I0815 16:48:55.316563    3478 status.go:330] multinode-700000 host status = "Stopped" (err=<nil>)
	I0815 16:48:55.316568    3478 status.go:343] host is not running, skipping remaining checks
	I0815 16:48:55.316571    3478 status.go:257] multinode-700000 status: &{Name:multinode-700000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr: exit status 7 (73.205708ms)

-- stdout --
	multinode-700000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 16:48:56.455043    3482 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:48:56.455268    3482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:56.455275    3482 out.go:358] Setting ErrFile to fd 2...
	I0815 16:48:56.455279    3482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:56.455437    3482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:48:56.455594    3482 out.go:352] Setting JSON to false
	I0815 16:48:56.455609    3482 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:48:56.455649    3482 notify.go:220] Checking for updates...
	I0815 16:48:56.455867    3482 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:48:56.455878    3482 status.go:255] checking status of multinode-700000 ...
	I0815 16:48:56.456152    3482 status.go:330] multinode-700000 host status = "Stopped" (err=<nil>)
	I0815 16:48:56.456157    3482 status.go:343] host is not running, skipping remaining checks
	I0815 16:48:56.456160    3482 status.go:257] multinode-700000 status: &{Name:multinode-700000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0815 16:48:57.020143    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr: exit status 7 (72.385ms)

-- stdout --
	multinode-700000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 16:48:59.462518    3484 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:48:59.462755    3484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:59.462760    3484 out.go:358] Setting ErrFile to fd 2...
	I0815 16:48:59.462764    3484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:48:59.462963    3484 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:48:59.463133    3484 out.go:352] Setting JSON to false
	I0815 16:48:59.463148    3484 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:48:59.463191    3484 notify.go:220] Checking for updates...
	I0815 16:48:59.463415    3484 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:48:59.463422    3484 status.go:255] checking status of multinode-700000 ...
	I0815 16:48:59.463704    3484 status.go:330] multinode-700000 host status = "Stopped" (err=<nil>)
	I0815 16:48:59.463709    3484 status.go:343] host is not running, skipping remaining checks
	I0815 16:48:59.463712    3484 status.go:257] multinode-700000 status: &{Name:multinode-700000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr: exit status 7 (71.692584ms)

-- stdout --
	multinode-700000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 16:49:01.266781    3486 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:49:01.266987    3486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:01.266991    3486 out.go:358] Setting ErrFile to fd 2...
	I0815 16:49:01.266994    3486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:01.267156    3486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:49:01.267324    3486 out.go:352] Setting JSON to false
	I0815 16:49:01.267339    3486 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:49:01.267379    3486 notify.go:220] Checking for updates...
	I0815 16:49:01.267603    3486 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:49:01.267609    3486 status.go:255] checking status of multinode-700000 ...
	I0815 16:49:01.267874    3486 status.go:330] multinode-700000 host status = "Stopped" (err=<nil>)
	I0815 16:49:01.267879    3486 status.go:343] host is not running, skipping remaining checks
	I0815 16:49:01.267882    3486 status.go:257] multinode-700000 status: &{Name:multinode-700000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr: exit status 7 (72.485833ms)

-- stdout --
	multinode-700000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 16:49:07.109841    3490 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:49:07.110050    3490 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:07.110054    3490 out.go:358] Setting ErrFile to fd 2...
	I0815 16:49:07.110058    3490 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:07.110275    3490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:49:07.110448    3490 out.go:352] Setting JSON to false
	I0815 16:49:07.110461    3490 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:49:07.110499    3490 notify.go:220] Checking for updates...
	I0815 16:49:07.110764    3490 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:49:07.110771    3490 status.go:255] checking status of multinode-700000 ...
	I0815 16:49:07.111037    3490 status.go:330] multinode-700000 host status = "Stopped" (err=<nil>)
	I0815 16:49:07.111041    3490 status.go:343] host is not running, skipping remaining checks
	I0815 16:49:07.111044    3490 status.go:257] multinode-700000 status: &{Name:multinode-700000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr: exit status 7 (74.40575ms)

-- stdout --
	multinode-700000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 16:49:16.488851    3493 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:49:16.489063    3493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:16.489067    3493 out.go:358] Setting ErrFile to fd 2...
	I0815 16:49:16.489071    3493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:16.489219    3493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:49:16.489388    3493 out.go:352] Setting JSON to false
	I0815 16:49:16.489406    3493 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:49:16.489437    3493 notify.go:220] Checking for updates...
	I0815 16:49:16.489673    3493 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:49:16.489689    3493 status.go:255] checking status of multinode-700000 ...
	I0815 16:49:16.489989    3493 status.go:330] multinode-700000 host status = "Stopped" (err=<nil>)
	I0815 16:49:16.489996    3493 status.go:343] host is not running, skipping remaining checks
	I0815 16:49:16.490000    3493 status.go:257] multinode-700000 status: &{Name:multinode-700000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr: exit status 7 (73.949875ms)

-- stdout --
	multinode-700000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 16:49:26.459665    3495 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:49:26.459862    3495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:26.459867    3495 out.go:358] Setting ErrFile to fd 2...
	I0815 16:49:26.459871    3495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:26.460038    3495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:49:26.460227    3495 out.go:352] Setting JSON to false
	I0815 16:49:26.460245    3495 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:49:26.460277    3495 notify.go:220] Checking for updates...
	I0815 16:49:26.460550    3495 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:49:26.460557    3495 status.go:255] checking status of multinode-700000 ...
	I0815 16:49:26.460849    3495 status.go:330] multinode-700000 host status = "Stopped" (err=<nil>)
	I0815 16:49:26.460854    3495 status.go:343] host is not running, skipping remaining checks
	I0815 16:49:26.460857    3495 status.go:257] multinode-700000 status: &{Name:multinode-700000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr: exit status 7 (73.60925ms)

-- stdout --
	multinode-700000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 16:49:40.004726    3498 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:49:40.004944    3498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:40.004949    3498 out.go:358] Setting ErrFile to fd 2...
	I0815 16:49:40.004952    3498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:40.005134    3498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:49:40.005287    3498 out.go:352] Setting JSON to false
	I0815 16:49:40.005300    3498 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:49:40.005339    3498 notify.go:220] Checking for updates...
	I0815 16:49:40.005550    3498 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:49:40.005557    3498 status.go:255] checking status of multinode-700000 ...
	I0815 16:49:40.005830    3498 status.go:330] multinode-700000 host status = "Stopped" (err=<nil>)
	I0815 16:49:40.005835    3498 status.go:343] host is not running, skipping remaining checks
	I0815 16:49:40.005838    3498 status.go:257] multinode-700000 status: &{Name:multinode-700000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000: exit status 7 (34.084667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (46.29s)
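
Note: most of this test's 46.29s is the harness re-polling status with growing gaps (16:48:53, :55, :56, :59, then 16:49:01, :07, :16, :26 and :40 in the stderr timestamps above). The loop below is an illustrative shell equivalent of that backoff, not the test's actual code; the interval values are assumptions read off the timestamps:

	# keep polling until the cluster reports itself healthy, backing off between attempts
	for delay in 1 1 3 2 6 9 10 14; do
	    out/minikube-darwin-arm64 -p multinode-700000 status -v=7 --alsologtostderr && break
	    sleep "$delay"
	done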

TestMultiNode/serial/RestartKeepsNodes (9.1s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-700000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-700000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-700000: (3.750751417s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-700000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-700000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.214076041s)

-- stdout --
	* [multinode-700000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-700000" primary control-plane node in "multinode-700000" cluster
	* Restarting existing qemu2 VM for "multinode-700000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-700000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 16:49:43.881838    3524 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:49:43.881996    3524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:43.882000    3524 out.go:358] Setting ErrFile to fd 2...
	I0815 16:49:43.882003    3524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:43.882148    3524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:49:43.883342    3524 out.go:352] Setting JSON to false
	I0815 16:49:43.902186    3524 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2951,"bootTime":1723762832,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:49:43.902252    3524 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:49:43.906284    3524 out.go:177] * [multinode-700000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:49:43.913341    3524 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:49:43.913381    3524 notify.go:220] Checking for updates...
	I0815 16:49:43.920298    3524 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:49:43.923308    3524 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:49:43.926248    3524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:49:43.929242    3524 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:49:43.932280    3524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:49:43.933803    3524 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:49:43.933860    3524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:49:43.938275    3524 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 16:49:43.945094    3524 start.go:297] selected driver: qemu2
	I0815 16:49:43.945100    3524 start.go:901] validating driver "qemu2" against &{Name:multinode-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:49:43.945151    3524 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:49:43.947466    3524 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:49:43.947508    3524 cni.go:84] Creating CNI manager for ""
	I0815 16:49:43.947513    3524 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 16:49:43.947552    3524 start.go:340] cluster config:
	{Name:multinode-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:49:43.951170    3524 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:49:43.958376    3524 out.go:177] * Starting "multinode-700000" primary control-plane node in "multinode-700000" cluster
	I0815 16:49:43.962273    3524 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:49:43.962292    3524 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 16:49:43.962304    3524 cache.go:56] Caching tarball of preloaded images
	I0815 16:49:43.962377    3524 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:49:43.962384    3524 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:49:43.962479    3524 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/multinode-700000/config.json ...
	I0815 16:49:43.962917    3524 start.go:360] acquireMachinesLock for multinode-700000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:49:43.962953    3524 start.go:364] duration metric: took 29.458µs to acquireMachinesLock for "multinode-700000"
	I0815 16:49:43.962964    3524 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:49:43.962970    3524 fix.go:54] fixHost starting: 
	I0815 16:49:43.963100    3524 fix.go:112] recreateIfNeeded on multinode-700000: state=Stopped err=<nil>
	W0815 16:49:43.963109    3524 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:49:43.971247    3524 out.go:177] * Restarting existing qemu2 VM for "multinode-700000" ...
	I0815 16:49:43.975263    3524 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:49:43.975310    3524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:a8:f5:55:42:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2
	I0815 16:49:43.977398    3524 main.go:141] libmachine: STDOUT: 
	I0815 16:49:43.977417    3524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:49:43.977446    3524 fix.go:56] duration metric: took 14.476916ms for fixHost
	I0815 16:49:43.977449    3524 start.go:83] releasing machines lock for "multinode-700000", held for 14.491625ms
	W0815 16:49:43.977457    3524 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:49:43.977492    3524 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:49:43.977497    3524 start.go:729] Will try again in 5 seconds ...
	I0815 16:49:48.979503    3524 start.go:360] acquireMachinesLock for multinode-700000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:49:48.979884    3524 start.go:364] duration metric: took 257.209µs to acquireMachinesLock for "multinode-700000"
	I0815 16:49:48.980012    3524 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:49:48.980032    3524 fix.go:54] fixHost starting: 
	I0815 16:49:48.980679    3524 fix.go:112] recreateIfNeeded on multinode-700000: state=Stopped err=<nil>
	W0815 16:49:48.980710    3524 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:49:48.986247    3524 out.go:177] * Restarting existing qemu2 VM for "multinode-700000" ...
	I0815 16:49:48.993114    3524 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:49:48.993386    3524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:a8:f5:55:42:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2
	I0815 16:49:49.002411    3524 main.go:141] libmachine: STDOUT: 
	I0815 16:49:49.002470    3524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:49:49.002536    3524 fix.go:56] duration metric: took 22.504542ms for fixHost
	I0815 16:49:49.002552    3524 start.go:83] releasing machines lock for "multinode-700000", held for 22.643625ms
	W0815 16:49:49.002728    3524 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-700000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-700000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:49:49.010088    3524 out.go:201] 
	W0815 16:49:49.014257    3524 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:49:49.014289    3524 out.go:270] * 
	* 
	W0815 16:49:49.017324    3524 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:49:49.024263    3524 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-700000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-700000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000: exit status 7 (32.188334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.10s)
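
Note: this restart fails the same way every start in this report does: the qemu2 driver cannot reach the socket_vmnet daemon ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), retries once after 5 seconds, and gives up with GUEST_PROVISION / exit status 80. A first-pass diagnosis on the build host might look like the sketch below; the Homebrew service command is an assumption about how socket_vmnet was installed, and the delete fallback is the one minikube itself suggests:

	ls -l /var/run/socket_vmnet                  # does the socket path from the cluster config exist?
	pgrep -fl socket_vmnet                       # is any socket_vmnet daemon running?
	sudo brew services restart socket_vmnet      # only if socket_vmnet was installed as a Homebrew service
	out/minikube-darwin-arm64 delete -p multinode-700000    # fallback suggested in the output above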

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 node delete m03: exit status 83 (41.178958ms)

-- stdout --
	* The control-plane node multinode-700000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-700000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-700000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status --alsologtostderr: exit status 7 (29.679125ms)

-- stdout --
	multinode-700000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 16:49:49.208394    3538 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:49:49.208542    3538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:49.208545    3538 out.go:358] Setting ErrFile to fd 2...
	I0815 16:49:49.208547    3538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:49.208677    3538 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:49:49.208800    3538 out.go:352] Setting JSON to false
	I0815 16:49:49.208813    3538 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:49:49.208857    3538 notify.go:220] Checking for updates...
	I0815 16:49:49.209012    3538 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:49:49.209017    3538 status.go:255] checking status of multinode-700000 ...
	I0815 16:49:49.209216    3538 status.go:330] multinode-700000 host status = "Stopped" (err=<nil>)
	I0815 16:49:49.209220    3538 status.go:343] host is not running, skipping remaining checks
	I0815 16:49:49.209222    3538 status.go:257] multinode-700000 status: &{Name:multinode-700000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-700000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000: exit status 7 (30.136917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
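
Note: exit status 83 is minikube declining to run a node operation against a stopped control plane; the remedy it prints is the start command itself. In this run that remedy cannot succeed either, since start fails with exit status 80 on the socket_vmnet error, but the intended recovery sequence is simply (commands taken from the output above):

	out/minikube-darwin-arm64 start -p multinode-700000
	out/minikube-darwin-arm64 -p multinode-700000 node delete m03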

TestMultiNode/serial/StopMultiNode (3.69s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-700000 stop: (3.561570833s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status: exit status 7 (62.976375ms)

-- stdout --
	multinode-700000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-700000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-700000 status --alsologtostderr: exit status 7 (32.382833ms)

-- stdout --
	multinode-700000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 16:49:52.896048    3562 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:49:52.896202    3562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:52.896209    3562 out.go:358] Setting ErrFile to fd 2...
	I0815 16:49:52.896211    3562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:52.896368    3562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:49:52.896508    3562 out.go:352] Setting JSON to false
	I0815 16:49:52.896523    3562 mustload.go:65] Loading cluster: multinode-700000
	I0815 16:49:52.896574    3562 notify.go:220] Checking for updates...
	I0815 16:49:52.896726    3562 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:49:52.896731    3562 status.go:255] checking status of multinode-700000 ...
	I0815 16:49:52.896944    3562 status.go:330] multinode-700000 host status = "Stopped" (err=<nil>)
	I0815 16:49:52.896948    3562 status.go:343] host is not running, skipping remaining checks
	I0815 16:49:52.896953    3562 status.go:257] multinode-700000 status: &{Name:multinode-700000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-700000 status --alsologtostderr": multinode-700000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-700000 status --alsologtostderr": multinode-700000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000: exit status 7 (29.933917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.69s)
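
Note: the "incorrect number of stopped hosts/kubelets" assertions count the per-node "host: Stopped" and "kubelet: Stopped" lines in the status output, presumably expecting one pair per node of a two-node cluster. Since no second node was ever added, only the control-plane entry exists and both counts come out at 1. The same count from the shell (the grep invocation is mine, not the test's):

	out/minikube-darwin-arm64 -p multinode-700000 status --alsologtostderr | grep -c "host: Stopped"    # 1, not the expected 2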

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-700000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-700000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.185754708s)

-- stdout --
	* [multinode-700000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-700000" primary control-plane node in "multinode-700000" cluster
	* Restarting existing qemu2 VM for "multinode-700000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-700000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 16:49:52.955998    3566 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:49:52.956132    3566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:52.956136    3566 out.go:358] Setting ErrFile to fd 2...
	I0815 16:49:52.956138    3566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:49:52.956280    3566 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:49:52.957271    3566 out.go:352] Setting JSON to false
	I0815 16:49:52.973402    3566 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2960,"bootTime":1723762832,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:49:52.973484    3566 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:49:52.977326    3566 out.go:177] * [multinode-700000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:49:52.984208    3566 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:49:52.984251    3566 notify.go:220] Checking for updates...
	I0815 16:49:52.991117    3566 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:49:52.994199    3566 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:49:52.997225    3566 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:49:53.000150    3566 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:49:53.003245    3566 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:49:53.006602    3566 config.go:182] Loaded profile config "multinode-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:49:53.006855    3566 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:49:53.010134    3566 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 16:49:53.017213    3566 start.go:297] selected driver: qemu2
	I0815 16:49:53.017221    3566 start.go:901] validating driver "qemu2" against &{Name:multinode-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:49:53.017290    3566 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:49:53.019568    3566 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:49:53.019607    3566 cni.go:84] Creating CNI manager for ""
	I0815 16:49:53.019611    3566 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 16:49:53.019659    3566 start.go:340] cluster config:
	{Name:multinode-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:49:53.023019    3566 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:49:53.030132    3566 out.go:177] * Starting "multinode-700000" primary control-plane node in "multinode-700000" cluster
	I0815 16:49:53.034253    3566 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:49:53.034272    3566 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 16:49:53.034280    3566 cache.go:56] Caching tarball of preloaded images
	I0815 16:49:53.034347    3566 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:49:53.034353    3566 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:49:53.034421    3566 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/multinode-700000/config.json ...
	I0815 16:49:53.034851    3566 start.go:360] acquireMachinesLock for multinode-700000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:49:53.034885    3566 start.go:364] duration metric: took 24.833µs to acquireMachinesLock for "multinode-700000"
	I0815 16:49:53.034895    3566 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:49:53.034902    3566 fix.go:54] fixHost starting: 
	I0815 16:49:53.035023    3566 fix.go:112] recreateIfNeeded on multinode-700000: state=Stopped err=<nil>
	W0815 16:49:53.035032    3566 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:49:53.039192    3566 out.go:177] * Restarting existing qemu2 VM for "multinode-700000" ...
	I0815 16:49:53.047166    3566 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:49:53.047203    3566 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:a8:f5:55:42:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2
	I0815 16:49:53.049382    3566 main.go:141] libmachine: STDOUT: 
	I0815 16:49:53.049408    3566 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:49:53.049443    3566 fix.go:56] duration metric: took 14.541959ms for fixHost
	I0815 16:49:53.049447    3566 start.go:83] releasing machines lock for "multinode-700000", held for 14.557792ms
	W0815 16:49:53.049454    3566 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:49:53.049494    3566 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:49:53.049501    3566 start.go:729] Will try again in 5 seconds ...
	I0815 16:49:58.051480    3566 start.go:360] acquireMachinesLock for multinode-700000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:49:58.051986    3566 start.go:364] duration metric: took 422.084µs to acquireMachinesLock for "multinode-700000"
	I0815 16:49:58.052126    3566 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:49:58.052178    3566 fix.go:54] fixHost starting: 
	I0815 16:49:58.052865    3566 fix.go:112] recreateIfNeeded on multinode-700000: state=Stopped err=<nil>
	W0815 16:49:58.052897    3566 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:49:58.057434    3566 out.go:177] * Restarting existing qemu2 VM for "multinode-700000" ...
	I0815 16:49:58.064347    3566 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:49:58.064526    3566 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:a8:f5:55:42:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/multinode-700000/disk.qcow2
	I0815 16:49:58.073993    3566 main.go:141] libmachine: STDOUT: 
	I0815 16:49:58.074080    3566 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:49:58.074170    3566 fix.go:56] duration metric: took 22.022875ms for fixHost
	I0815 16:49:58.074191    3566 start.go:83] releasing machines lock for "multinode-700000", held for 22.180375ms
	W0815 16:49:58.074413    3566 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-700000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-700000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:49:58.081328    3566 out.go:201] 
	W0815 16:49:58.085434    3566 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:49:58.085589    3566 out.go:270] * 
	* 
	W0815 16:49:58.088065    3566 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:49:58.100354    3566 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-700000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000: exit status 7 (68.514875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
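Note: every start and restart in this run dies at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and the VM is never launched. A hedged pre-flight probe, assuming socket_vmnet listens on a plain unix socket at that path, which reproduces the same "Connection refused" without involving minikube:

	// Hypothetical probe for the socket_vmnet daemon. "connection refused"
	// (or "no such file or directory") from DialTimeout is the same
	// condition libmachine reports in the stderr dumps above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet not reachable: %v\n", err) // daemon down or stale socket
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}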

TestMultiNode/serial/ValidateNameConflict (20.06s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-700000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-700000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-700000-m01 --driver=qemu2 : exit status 80 (9.875806208s)

-- stdout --
	* [multinode-700000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-700000-m01" primary control-plane node in "multinode-700000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-700000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-700000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-700000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-700000-m02 --driver=qemu2 : exit status 80 (9.957983125s)

-- stdout --
	* [multinode-700000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-700000-m02" primary control-plane node in "multinode-700000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-700000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-700000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-700000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-700000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-700000: exit status 83 (81.05975ms)

-- stdout --
	* The control-plane node multinode-700000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-700000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-700000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-700000 -n multinode-700000: exit status 7 (30.91025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.06s)
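Note: this test exercises minikube's "<profile>-mNN" machine-naming convention: secondary nodes of profile "multinode-700000" occupy names such as "multinode-700000-m02", so a standalone profile with one of those names collides with the node namespace. A rough sketch of that convention as it appears in the output above (hypothetical helpers, not minikube's actual code):

	// nodeMachineName mirrors the "<profile>-m0N" pattern visible in the
	// log; conflictsWith flags standalone profile names that fall inside
	// an existing profile's node namespace.
	package main

	import (
		"fmt"
		"strings"
	)

	func nodeMachineName(profile string, index int) string {
		return fmt.Sprintf("%s-m%02d", profile, index)
	}

	func conflictsWith(profile, candidate string) bool {
		return strings.HasPrefix(candidate, profile+"-m")
	}

	func main() {
		fmt.Println(nodeMachineName("multinode-700000", 2))                    // multinode-700000-m02
		fmt.Println(conflictsWith("multinode-700000", "multinode-700000-m01")) // true
	}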

TestPreload (10.00s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-104000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-104000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.849815834s)

-- stdout --
	* [test-preload-104000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-104000" primary control-plane node in "test-preload-104000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-104000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 16:50:18.383927    3622 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:50:18.384051    3622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:50:18.384054    3622 out.go:358] Setting ErrFile to fd 2...
	I0815 16:50:18.384060    3622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:50:18.384201    3622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:50:18.385260    3622 out.go:352] Setting JSON to false
	I0815 16:50:18.401365    3622 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2986,"bootTime":1723762832,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:50:18.401435    3622 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:50:18.406119    3622 out.go:177] * [test-preload-104000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:50:18.414016    3622 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:50:18.414054    3622 notify.go:220] Checking for updates...
	I0815 16:50:18.422056    3622 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:50:18.425034    3622 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:50:18.428030    3622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:50:18.431031    3622 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:50:18.433927    3622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:50:18.437430    3622 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:50:18.437478    3622 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:50:18.441970    3622 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 16:50:18.449131    3622 start.go:297] selected driver: qemu2
	I0815 16:50:18.449139    3622 start.go:901] validating driver "qemu2" against <nil>
	I0815 16:50:18.449147    3622 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:50:18.451477    3622 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 16:50:18.454091    3622 out.go:177] * Automatically selected the socket_vmnet network
	I0815 16:50:18.455583    3622 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:50:18.455613    3622 cni.go:84] Creating CNI manager for ""
	I0815 16:50:18.455621    3622 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:50:18.455629    3622 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 16:50:18.455674    3622 start.go:340] cluster config:
	{Name:test-preload-104000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-104000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:50:18.459360    3622 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:50:18.467108    3622 out.go:177] * Starting "test-preload-104000" primary control-plane node in "test-preload-104000" cluster
	I0815 16:50:18.470975    3622 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0815 16:50:18.471061    3622 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/test-preload-104000/config.json ...
	I0815 16:50:18.471063    3622 cache.go:107] acquiring lock: {Name:mk254cd4493f2ec7ab8ef3c645b00096320b362d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:50:18.471078    3622 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/test-preload-104000/config.json: {Name:mk1712a577a1a6e4a470f1fd5c45849e6a7d12df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:50:18.471072    3622 cache.go:107] acquiring lock: {Name:mk5ebd5d9fabf0d0ad1dd23fa899fc4d8a6c6372 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:50:18.471085    3622 cache.go:107] acquiring lock: {Name:mk6b167c17519af545dbdca485206ffc482dc325 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:50:18.471095    3622 cache.go:107] acquiring lock: {Name:mkdecf8b178aab2c7eb32ab1a3249e33ed978329 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:50:18.471112    3622 cache.go:107] acquiring lock: {Name:mkf4b4879162f23239dfeb82f80647ee15eae239 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:50:18.471293    3622 cache.go:107] acquiring lock: {Name:mk375a1c0bd396db559d61e111e07b6305856c17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:50:18.471317    3622 cache.go:107] acquiring lock: {Name:mkc2d74456d4524139d583c87f87959811ee620b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:50:18.471318    3622 cache.go:107] acquiring lock: {Name:mk19e477a3cdf495d2c5015ab8fcae01cef39bb3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:50:18.471437    3622 start.go:360] acquireMachinesLock for test-preload-104000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:50:18.471535    3622 start.go:364] duration metric: took 86.542µs to acquireMachinesLock for "test-preload-104000"
	I0815 16:50:18.471532    3622 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0815 16:50:18.471556    3622 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0815 16:50:18.471584    3622 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0815 16:50:18.471609    3622 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0815 16:50:18.471551    3622 start.go:93] Provisioning new machine with config: &{Name:test-preload-104000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-104000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:50:18.471630    3622 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 16:50:18.471641    3622 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:50:18.471698    3622 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0815 16:50:18.471680    3622 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:50:18.471689    3622 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0815 16:50:18.476060    3622 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 16:50:18.484072    3622 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0815 16:50:18.484181    3622 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0815 16:50:18.484192    3622 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0815 16:50:18.484215    3622 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0815 16:50:18.484225    3622 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0815 16:50:18.484927    3622 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0815 16:50:18.484928    3622 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:50:18.485011    3622 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:50:18.493994    3622 start.go:159] libmachine.API.Create for "test-preload-104000" (driver="qemu2")
	I0815 16:50:18.494038    3622 client.go:168] LocalClient.Create starting
	I0815 16:50:18.494139    3622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 16:50:18.494172    3622 main.go:141] libmachine: Decoding PEM data...
	I0815 16:50:18.494189    3622 main.go:141] libmachine: Parsing certificate...
	I0815 16:50:18.494228    3622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 16:50:18.494251    3622 main.go:141] libmachine: Decoding PEM data...
	I0815 16:50:18.494259    3622 main.go:141] libmachine: Parsing certificate...
	I0815 16:50:18.494608    3622 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 16:50:18.644989    3622 main.go:141] libmachine: Creating SSH key...
	I0815 16:50:18.714227    3622 main.go:141] libmachine: Creating Disk image...
	I0815 16:50:18.714247    3622 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 16:50:18.714483    3622 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/disk.qcow2
	I0815 16:50:18.724427    3622 main.go:141] libmachine: STDOUT: 
	I0815 16:50:18.724453    3622 main.go:141] libmachine: STDERR: 
	I0815 16:50:18.724512    3622 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/disk.qcow2 +20000M
	I0815 16:50:18.733379    3622 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 16:50:18.733399    3622 main.go:141] libmachine: STDERR: 
	I0815 16:50:18.733415    3622 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/disk.qcow2
	I0815 16:50:18.733419    3622 main.go:141] libmachine: Starting QEMU VM...
	I0815 16:50:18.733436    3622 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:50:18.733469    3622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:45:0c:88:21:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/disk.qcow2
	I0815 16:50:18.735425    3622 main.go:141] libmachine: STDOUT: 
	I0815 16:50:18.735455    3622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:50:18.735474    3622 client.go:171] duration metric: took 241.420666ms to LocalClient.Create
	I0815 16:50:18.977535    3622 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0815 16:50:18.978374    3622 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0815 16:50:18.992583    3622 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0815 16:50:18.997484    3622 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0815 16:50:19.017457    3622 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0815 16:50:19.019902    3622 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0815 16:50:19.019934    3622 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0815 16:50:19.023282    3622 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0815 16:50:19.143318    3622 cache.go:157] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0815 16:50:19.143381    3622 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 672.255ms
	I0815 16:50:19.143418    3622 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0815 16:50:19.699199    3622 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0815 16:50:19.699303    3622 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 16:50:20.014707    3622 cache.go:157] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0815 16:50:20.014743    3622 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.543651625s
	I0815 16:50:20.014827    3622 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0815 16:50:20.735737    3622 start.go:128] duration metric: took 2.264045s to createHost
	I0815 16:50:20.735789    3622 start.go:83] releasing machines lock for "test-preload-104000", held for 2.264217167s
	W0815 16:50:20.735890    3622 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:50:20.747084    3622 out.go:177] * Deleting "test-preload-104000" in qemu2 ...
	W0815 16:50:20.776037    3622 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:50:20.776066    3622 start.go:729] Will try again in 5 seconds ...
	I0815 16:50:21.694985    3622 cache.go:157] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0815 16:50:21.695028    3622 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 3.223909459s
	I0815 16:50:21.695052    3622 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0815 16:50:21.825410    3622 cache.go:157] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0815 16:50:21.825453    3622 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.354154458s
	I0815 16:50:21.825479    3622 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0815 16:50:22.453786    3622 cache.go:157] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0815 16:50:22.453841    3622 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.982697125s
	I0815 16:50:22.453867    3622 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0815 16:50:23.753452    3622 cache.go:157] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0815 16:50:23.753498    3622 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.282372916s
	I0815 16:50:23.753521    3622 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0815 16:50:25.691591    3622 cache.go:157] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0815 16:50:25.691642    3622 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.220341417s
	I0815 16:50:25.691673    3622 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0815 16:50:25.776349    3622 start.go:360] acquireMachinesLock for test-preload-104000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:50:25.776719    3622 start.go:364] duration metric: took 305.833µs to acquireMachinesLock for "test-preload-104000"
	I0815 16:50:25.776819    3622 start.go:93] Provisioning new machine with config: &{Name:test-preload-104000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-104000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:50:25.777073    3622 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 16:50:25.787625    3622 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 16:50:25.840032    3622 start.go:159] libmachine.API.Create for "test-preload-104000" (driver="qemu2")
	I0815 16:50:25.840086    3622 client.go:168] LocalClient.Create starting
	I0815 16:50:25.840205    3622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 16:50:25.840278    3622 main.go:141] libmachine: Decoding PEM data...
	I0815 16:50:25.840296    3622 main.go:141] libmachine: Parsing certificate...
	I0815 16:50:25.840355    3622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 16:50:25.840400    3622 main.go:141] libmachine: Decoding PEM data...
	I0815 16:50:25.840415    3622 main.go:141] libmachine: Parsing certificate...
	I0815 16:50:25.840942    3622 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 16:50:26.004937    3622 main.go:141] libmachine: Creating SSH key...
	I0815 16:50:26.145344    3622 main.go:141] libmachine: Creating Disk image...
	I0815 16:50:26.145351    3622 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 16:50:26.145570    3622 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/disk.qcow2
	I0815 16:50:26.155180    3622 main.go:141] libmachine: STDOUT: 
	I0815 16:50:26.155201    3622 main.go:141] libmachine: STDERR: 
	I0815 16:50:26.155248    3622 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/disk.qcow2 +20000M
	I0815 16:50:26.163357    3622 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 16:50:26.163371    3622 main.go:141] libmachine: STDERR: 
	I0815 16:50:26.163386    3622 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/disk.qcow2
	I0815 16:50:26.163389    3622 main.go:141] libmachine: Starting QEMU VM...
	I0815 16:50:26.163405    3622 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:50:26.163449    3622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:1d:68:c8:ad:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/test-preload-104000/disk.qcow2
	I0815 16:50:26.165179    3622 main.go:141] libmachine: STDOUT: 
	I0815 16:50:26.165197    3622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:50:26.165213    3622 client.go:171] duration metric: took 325.117417ms to LocalClient.Create
	I0815 16:50:28.165490    3622 start.go:128] duration metric: took 2.388319791s to createHost
	I0815 16:50:28.165561    3622 start.go:83] releasing machines lock for "test-preload-104000", held for 2.388791042s
	W0815 16:50:28.165870    3622 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-104000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-104000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:50:28.179418    3622 out.go:201] 
	W0815 16:50:28.183472    3622 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:50:28.183501    3622 out.go:270] * 
	* 
	W0815 16:50:28.186212    3622 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:50:28.193324    3622 out.go:201] 

** /stderr **
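Before the network failure, the stderr above records the disk-provisioning steps that did succeed: libmachine converts the raw boot image to qcow2, then grows it by 20000 MB with a relative resize. Below is a minimal sketch of those two qemu-img invocations using Go's os/exec, with placeholder paths instead of the $MINIKUBE_HOME machine paths from the log; it illustrates the commands only and is not the qemu2 driver's actual code.

// Sketch: the two qemu-img calls from the "executing:" lines above.
// Paths are placeholders; qemu-img must be on PATH.
package main

import (
	"log"
	"os/exec"
)

func main() {
	raw := "disk.qcow2.raw" // placeholder for .minikube/machines/<name>/disk.qcow2.raw
	img := "disk.qcow2"     // placeholder for the qcow2 target

	// qemu-img convert -f raw -O qcow2 <raw> <img>
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img).CombinedOutput(); err != nil {
		log.Fatalf("convert failed: %v\n%s", err, out)
	}
	// qemu-img resize <img> +20000M; the leading "+" makes the resize
	// relative, matching the "Image resized." line in the log.
	if out, err := exec.Command("qemu-img", "resize", img, "+20000M").CombinedOutput(); err != nil {
		log.Fatalf("resize failed: %v\n%s", err, out)
	}
	log.Println("disk image ready:", img)
}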
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-104000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-15 16:50:28.208652 -0700 PDT m=+2712.139890334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-104000 -n test-preload-104000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-104000 -n test-preload-104000: exit status 7 (68.488125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-104000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-104000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-104000
--- FAIL: TestPreload (10.00s)
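Every qemu2 start failure in this stretch of the report shares one root cause: socket_vmnet_client exits with Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning nothing is listening on the socket that the automatically selected socket_vmnet network needs. A minimal pre-flight probe of that socket (an illustrative sketch, not part of the test suite) confirms whether the daemon is up before rerunning:

// Sketch: dial the unix socket that the qemu2 driver's socket_vmnet
// network uses. A dial error reproduces the "Connection refused" above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing qemu command line
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet unreachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Printf("socket_vmnet listening at %s\n", sock)
}

If the probe fails, restarting the socket_vmnet daemon on the CI host (per the socket_vmnet README, typically something like sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet) should clear this whole family of failures.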

TestScheduledStopUnix (9.95s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-990000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-990000 --memory=2048 --driver=qemu2 : exit status 80 (9.7978035s)

-- stdout --
	* [scheduled-stop-990000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-990000" primary control-plane node in "scheduled-stop-990000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-990000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-990000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-990000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-990000" primary control-plane node in "scheduled-stop-990000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-990000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-990000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-15 16:50:38.154608 -0700 PDT m=+2722.085737209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-990000 -n scheduled-stop-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-990000 -n scheduled-stop-990000: exit status 7 (70.950458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-990000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-990000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-990000
--- FAIL: TestScheduledStopUnix (9.95s)

TestSkaffold (13.22s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe4098674966 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe4098674966 version: (1.052624417s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-468000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-468000 --memory=2600 --driver=qemu2 : exit status 80 (9.921416583s)

-- stdout --
	* [skaffold-468000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-468000" primary control-plane node in "skaffold-468000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-468000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-468000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-468000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-468000" primary control-plane node in "skaffold-468000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-468000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-468000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-15 16:50:51.384931 -0700 PDT m=+2735.315914709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-468000 -n skaffold-468000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-468000 -n skaffold-468000: exit status 7 (62.704583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-468000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-468000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-468000
--- FAIL: TestSkaffold (13.22s)

TestRunningBinaryUpgrade (589.7s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.641877419 start -p running-upgrade-853000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.641877419 start -p running-upgrade-853000 --memory=2200 --vm-driver=qemu2 : (54.303593917s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-853000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0815 16:53:56.730329    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:53:57.023495    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-853000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.048819959s)
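TestRunningBinaryUpgrade is a two-phase test: a released minikube (v1.26.0 here) first creates the cluster, and the freshly built binary then re-runs start against the same live profile. Phase one succeeded above in about 54 seconds; phase two is the 8m22s invocation whose captured output follows below. In outline (a sketch with placeholder binary paths, not the test's actual code):

// Sketch: the two-phase start pattern of the running-binary upgrade test.
// The released-binary path is a placeholder for the temp file in the log.
package main

import (
	"log"
	"os/exec"
)

func start(bin, profile string, extra ...string) {
	args := append([]string{"start", "-p", profile, "--memory=2200"}, extra...)
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s start failed: %v\n%s", bin, err, out)
	}
}

func main() {
	const profile = "running-upgrade-853000"
	// Phase 1: released binary creates the cluster (succeeded above).
	start("/tmp/minikube-v1.26.0", profile, "--vm-driver=qemu2")
	// Phase 2: the binary under test upgrades the running cluster in place;
	// in this run it is the step that exits with status 80.
	start("out/minikube-darwin-arm64", profile, "--alsologtostderr", "-v=1", "--driver=qemu2")
}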

-- stdout --
	* [running-upgrade-853000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-853000" primary control-plane node in "running-upgrade-853000" cluster
	* Updating the running qemu2 "running-upgrade-853000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0815 16:52:28.788316    4006 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:52:28.788452    4006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:52:28.788460    4006 out.go:358] Setting ErrFile to fd 2...
	I0815 16:52:28.788463    4006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:52:28.788587    4006 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:52:28.789602    4006 out.go:352] Setting JSON to false
	I0815 16:52:28.806905    4006 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3116,"bootTime":1723762832,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:52:28.807018    4006 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:52:28.810470    4006 out.go:177] * [running-upgrade-853000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:52:28.817603    4006 notify.go:220] Checking for updates...
	I0815 16:52:28.820523    4006 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:52:28.829403    4006 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:52:28.837451    4006 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:52:28.840451    4006 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:52:28.843472    4006 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:52:28.846489    4006 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:52:28.848134    4006 config.go:182] Loaded profile config "running-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 16:52:28.851461    4006 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 16:52:28.854500    4006 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:52:28.858407    4006 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 16:52:28.865469    4006 start.go:297] selected driver: qemu2
	I0815 16:52:28.865474    4006 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50257 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 16:52:28.865520    4006 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:52:28.867855    4006 cni.go:84] Creating CNI manager for ""
	I0815 16:52:28.867874    4006 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:52:28.867908    4006 start.go:340] cluster config:
	{Name:running-upgrade-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50257 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 16:52:28.867962    4006 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:52:28.875498    4006 out.go:177] * Starting "running-upgrade-853000" primary control-plane node in "running-upgrade-853000" cluster
	I0815 16:52:28.879423    4006 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0815 16:52:28.879436    4006 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0815 16:52:28.879441    4006 cache.go:56] Caching tarball of preloaded images
	I0815 16:52:28.879486    4006 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:52:28.879491    4006 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0815 16:52:28.879538    4006 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/config.json ...
	I0815 16:52:28.879965    4006 start.go:360] acquireMachinesLock for running-upgrade-853000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:52:28.879992    4006 start.go:364] duration metric: took 20.958µs to acquireMachinesLock for "running-upgrade-853000"
	I0815 16:52:28.880001    4006 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:52:28.880006    4006 fix.go:54] fixHost starting: 
	I0815 16:52:28.880596    4006 fix.go:112] recreateIfNeeded on running-upgrade-853000: state=Running err=<nil>
	W0815 16:52:28.880606    4006 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:52:28.884501    4006 out.go:177] * Updating the running qemu2 "running-upgrade-853000" VM ...
	I0815 16:52:28.892398    4006 machine.go:93] provisionDockerMachine start ...
	I0815 16:52:28.892435    4006 main.go:141] libmachine: Using SSH client type: native
	I0815 16:52:28.892539    4006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ec05a0] 0x102ec2e00 <nil>  [] 0s} localhost 50225 <nil> <nil>}
	I0815 16:52:28.892543    4006 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:52:28.953663    4006 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-853000
	
	I0815 16:52:28.953679    4006 buildroot.go:166] provisioning hostname "running-upgrade-853000"
	I0815 16:52:28.953723    4006 main.go:141] libmachine: Using SSH client type: native
	I0815 16:52:28.953841    4006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ec05a0] 0x102ec2e00 <nil>  [] 0s} localhost 50225 <nil> <nil>}
	I0815 16:52:28.953847    4006 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-853000 && echo "running-upgrade-853000" | sudo tee /etc/hostname
	I0815 16:52:29.017303    4006 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-853000
	
	I0815 16:52:29.017355    4006 main.go:141] libmachine: Using SSH client type: native
	I0815 16:52:29.017470    4006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ec05a0] 0x102ec2e00 <nil>  [] 0s} localhost 50225 <nil> <nil>}
	I0815 16:52:29.017479    4006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-853000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-853000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-853000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:52:29.076568    4006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:52:29.076581    4006 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-964/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-964/.minikube}
	I0815 16:52:29.076590    4006 buildroot.go:174] setting up certificates
	I0815 16:52:29.076597    4006 provision.go:84] configureAuth start
	I0815 16:52:29.076602    4006 provision.go:143] copyHostCerts
	I0815 16:52:29.076677    4006 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-964/.minikube/cert.pem, removing ...
	I0815 16:52:29.076683    4006 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-964/.minikube/cert.pem
	I0815 16:52:29.076820    4006 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-964/.minikube/cert.pem (1123 bytes)
	I0815 16:52:29.076999    4006 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-964/.minikube/key.pem, removing ...
	I0815 16:52:29.077003    4006 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-964/.minikube/key.pem
	I0815 16:52:29.077048    4006 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-964/.minikube/key.pem (1679 bytes)
	I0815 16:52:29.077143    4006 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-964/.minikube/ca.pem, removing ...
	I0815 16:52:29.077146    4006 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-964/.minikube/ca.pem
	I0815 16:52:29.077187    4006 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-964/.minikube/ca.pem (1082 bytes)
	I0815 16:52:29.077277    4006 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-964/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-853000 san=[127.0.0.1 localhost minikube running-upgrade-853000]
	I0815 16:52:29.134936    4006 provision.go:177] copyRemoteCerts
	I0815 16:52:29.134986    4006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:52:29.134996    4006 sshutil.go:53] new ssh client: &{IP:localhost Port:50225 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/running-upgrade-853000/id_rsa Username:docker}
	I0815 16:52:29.166952    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:52:29.174599    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0815 16:52:29.181109    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 16:52:29.188244    4006 provision.go:87] duration metric: took 111.641167ms to configureAuth
	I0815 16:52:29.188254    4006 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:52:29.188358    4006 config.go:182] Loaded profile config "running-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 16:52:29.188390    4006 main.go:141] libmachine: Using SSH client type: native
	I0815 16:52:29.188480    4006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ec05a0] 0x102ec2e00 <nil>  [] 0s} localhost 50225 <nil> <nil>}
	I0815 16:52:29.188485    4006 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:52:29.249310    4006 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:52:29.249319    4006 buildroot.go:70] root file system type: tmpfs
	I0815 16:52:29.249370    4006 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:52:29.249413    4006 main.go:141] libmachine: Using SSH client type: native
	I0815 16:52:29.249519    4006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ec05a0] 0x102ec2e00 <nil>  [] 0s} localhost 50225 <nil> <nil>}
	I0815 16:52:29.249551    4006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:52:29.314164    4006 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:52:29.314229    4006 main.go:141] libmachine: Using SSH client type: native
	I0815 16:52:29.314364    4006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ec05a0] 0x102ec2e00 <nil>  [] 0s} localhost 50225 <nil> <nil>}
	I0815 16:52:29.314372    4006 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:52:29.375214    4006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:52:29.375226    4006 machine.go:96] duration metric: took 482.816833ms to provisionDockerMachine
	I0815 16:52:29.375231    4006 start.go:293] postStartSetup for "running-upgrade-853000" (driver="qemu2")
	I0815 16:52:29.375237    4006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:52:29.375289    4006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:52:29.375297    4006 sshutil.go:53] new ssh client: &{IP:localhost Port:50225 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/running-upgrade-853000/id_rsa Username:docker}
	I0815 16:52:29.407374    4006 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:52:29.408730    4006 info.go:137] Remote host: Buildroot 2021.02.12
	I0815 16:52:29.408740    4006 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-964/.minikube/addons for local assets ...
	I0815 16:52:29.408812    4006 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-964/.minikube/files for local assets ...
	I0815 16:52:29.408912    4006 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem -> 14462.pem in /etc/ssl/certs
	I0815 16:52:29.409002    4006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:52:29.412452    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem --> /etc/ssl/certs/14462.pem (1708 bytes)
	I0815 16:52:29.424530    4006 start.go:296] duration metric: took 49.291042ms for postStartSetup
	I0815 16:52:29.424549    4006 fix.go:56] duration metric: took 544.538125ms for fixHost
	I0815 16:52:29.424594    4006 main.go:141] libmachine: Using SSH client type: native
	I0815 16:52:29.424723    4006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ec05a0] 0x102ec2e00 <nil>  [] 0s} localhost 50225 <nil> <nil>}
	I0815 16:52:29.424727    4006 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:52:29.485376    4006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723765949.950622971
	
	I0815 16:52:29.485384    4006 fix.go:216] guest clock: 1723765949.950622971
	I0815 16:52:29.485388    4006 fix.go:229] Guest: 2024-08-15 16:52:29.950622971 -0700 PDT Remote: 2024-08-15 16:52:29.42455 -0700 PDT m=+0.656518043 (delta=526.072971ms)
	I0815 16:52:29.485401    4006 fix.go:200] guest clock delta is within tolerance: 526.072971ms
	I0815 16:52:29.485404    4006 start.go:83] releasing machines lock for "running-upgrade-853000", held for 605.401125ms
	I0815 16:52:29.485463    4006 ssh_runner.go:195] Run: cat /version.json
	I0815 16:52:29.485465    4006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:52:29.485472    4006 sshutil.go:53] new ssh client: &{IP:localhost Port:50225 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/running-upgrade-853000/id_rsa Username:docker}
	I0815 16:52:29.485483    4006 sshutil.go:53] new ssh client: &{IP:localhost Port:50225 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/running-upgrade-853000/id_rsa Username:docker}
	W0815 16:52:29.486036    4006 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50225: connect: connection refused
	I0815 16:52:29.486056    4006 retry.go:31] will retry after 260.935219ms: dial tcp [::1]:50225: connect: connection refused
	W0815 16:52:29.781721    4006 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0815 16:52:29.781807    4006 ssh_runner.go:195] Run: systemctl --version
	I0815 16:52:29.783673    4006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 16:52:29.785488    4006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:52:29.785514    4006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0815 16:52:29.788625    4006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0815 16:52:29.793002    4006 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:52:29.793011    4006 start.go:495] detecting cgroup driver to use...
	I0815 16:52:29.793074    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:52:29.798410    4006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0815 16:52:29.801329    4006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:52:29.804712    4006 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:52:29.804738    4006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:52:29.808235    4006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:52:29.811475    4006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:52:29.814318    4006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:52:29.817435    4006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:52:29.820317    4006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:52:29.824132    4006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:52:29.827333    4006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:52:29.830671    4006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:52:29.833304    4006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:52:29.836398    4006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:52:29.930043    4006 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 16:52:29.938786    4006 start.go:495] detecting cgroup driver to use...
	I0815 16:52:29.938856    4006 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:52:29.944294    4006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:52:29.949845    4006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:52:29.957606    4006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:52:29.962720    4006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:52:29.967396    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:52:29.973529    4006 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:52:29.974776    4006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:52:29.977216    4006 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0815 16:52:29.981705    4006 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:52:30.073151    4006 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:52:30.161099    4006 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:52:30.161160    4006 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:52:30.166445    4006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:52:30.253174    4006 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:52:33.226335    4006 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9731115s)
	I0815 16:52:33.226422    4006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:52:33.231248    4006 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:52:33.237283    4006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:52:33.241977    4006 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:52:33.334979    4006 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:52:33.416356    4006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:52:33.495344    4006 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:52:33.501054    4006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:52:33.505771    4006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:52:33.576405    4006 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:52:33.618899    4006 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:52:33.618987    4006 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:52:33.620921    4006 start.go:563] Will wait 60s for crictl version
	I0815 16:52:33.620974    4006 ssh_runner.go:195] Run: which crictl
	I0815 16:52:33.622502    4006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:52:33.634247    4006 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0815 16:52:33.634329    4006 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:52:33.651823    4006 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:52:33.674123    4006 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0815 16:52:33.674250    4006 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0815 16:52:33.675518    4006 kubeadm.go:883] updating cluster {Name:running-upgrade-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50257 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0815 16:52:33.675563    4006 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0815 16:52:33.675604    4006 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:52:33.686659    4006 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0815 16:52:33.686667    4006 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0815 16:52:33.686714    4006 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0815 16:52:33.689670    4006 ssh_runner.go:195] Run: which lz4
	I0815 16:52:33.690914    4006 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 16:52:33.692127    4006 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 16:52:33.692138    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0815 16:52:34.680126    4006 docker.go:649] duration metric: took 989.233042ms to copy over tarball
	I0815 16:52:34.680184    4006 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 16:52:35.802801    4006 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.1225915s)
	I0815 16:52:35.802823    4006 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 16:52:35.818256    4006 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0815 16:52:35.821099    4006 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0815 16:52:35.826221    4006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:52:35.908740    4006 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:52:37.106643    4006 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.19787225s)
	I0815 16:52:37.106734    4006 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:52:37.119296    4006 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0815 16:52:37.119306    4006 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0815 16:52:37.119310    4006 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 16:52:37.124500    4006 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:52:37.126547    4006 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:52:37.128608    4006 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:52:37.128676    4006 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:52:37.130514    4006 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:52:37.130527    4006 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:52:37.132037    4006 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:52:37.132052    4006 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 16:52:37.133112    4006 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0815 16:52:37.133229    4006 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:52:37.134568    4006 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0815 16:52:37.134687    4006 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 16:52:37.135638    4006 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0815 16:52:37.135760    4006 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:52:37.136751    4006 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0815 16:52:37.137318    4006 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:52:37.574731    4006 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:52:37.578893    4006 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0815 16:52:37.590101    4006 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:52:37.592038    4006 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0815 16:52:37.592062    4006 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:52:37.592101    4006 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:52:37.599400    4006 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0815 16:52:37.599422    4006 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 16:52:37.599484    4006 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0815 16:52:37.605460    4006 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0815 16:52:37.609577    4006 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:52:37.611600    4006 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0815 16:52:37.611617    4006 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:52:37.611621    4006 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0815 16:52:37.611652    4006 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:52:37.612845    4006 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0815 16:52:37.628779    4006 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0815 16:52:37.640038    4006 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0815 16:52:37.640047    4006 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0815 16:52:37.640059    4006 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0815 16:52:37.640059    4006 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:52:37.640121    4006 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0815 16:52:37.640167    4006 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:52:37.643460    4006 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0815 16:52:37.643488    4006 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0815 16:52:37.643503    4006 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0815 16:52:37.643544    4006 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0815 16:52:37.648425    4006 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0815 16:52:37.648579    4006 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:52:37.664147    4006 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0815 16:52:37.664155    4006 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0815 16:52:37.664213    4006 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0815 16:52:37.664268    4006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0815 16:52:37.667100    4006 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0815 16:52:37.667110    4006 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0815 16:52:37.667117    4006 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:52:37.667127    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0815 16:52:37.667156    4006 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:52:37.681041    4006 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0815 16:52:37.681055    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0815 16:52:37.684323    4006 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0815 16:52:37.684448    4006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0815 16:52:37.710102    4006 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0815 16:52:37.710124    4006 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0815 16:52:37.710146    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0815 16:52:37.733997    4006 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0815 16:52:37.734115    4006 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:52:37.762407    4006 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0815 16:52:37.762438    4006 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:52:37.762491    4006 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:52:37.765294    4006 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0815 16:52:37.765302    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0815 16:52:37.780175    4006 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 16:52:37.780306    4006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0815 16:52:37.814960    4006 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0815 16:52:37.814994    4006 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0815 16:52:37.815021    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0815 16:52:37.845275    4006 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 16:52:37.845292    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0815 16:52:38.077253    4006 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 16:52:38.077295    4006 cache_images.go:92] duration metric: took 957.968042ms to LoadCachedImages
	W0815 16:52:38.077337    4006 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
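
The aborting error above is a host-side cache miss: stat on the kube-controller-manager cache file fails on the Jenkins host itself, so LoadCachedImages gives up after transferring only pause, coredns, and storage-provisioner. A sketch of the existence check behind that message, with the path taken verbatim from the log:

    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    )

    func main() {
    	// Path copied from the failing stat in the log above.
    	p := "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1"
    	if _, err := os.Stat(p); errors.Is(err, fs.ErrNotExist) {
    		// This is the condition behind "no such file or directory".
    		fmt.Println("cache miss:", err)
    	}
    }
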
	I0815 16:52:38.077345    4006 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0815 16:52:38.077391    4006 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-853000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:52:38.077467    4006 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0815 16:52:38.090922    4006 cni.go:84] Creating CNI manager for ""
	I0815 16:52:38.090934    4006 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:52:38.090939    4006 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 16:52:38.090947    4006 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-853000 NodeName:running-upgrade-853000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 16:52:38.091004    4006 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-853000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 16:52:38.091060    4006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0815 16:52:38.094122    4006 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:52:38.094154    4006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 16:52:38.096903    4006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0815 16:52:38.102143    4006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:52:38.107095    4006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
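
The 2096-byte file written here is the multi-document config rendered above: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration stacked in one YAML stream. A small Go sketch that splits such a stream and prints each document's kind (assumes gopkg.in/yaml.v3 is available; the path is the one from the log):

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f) // handles "---"-separated documents
    	for {
    		var doc struct {
    			Kind string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Println(doc.Kind) // InitConfiguration, ClusterConfiguration, ...
    	}
    }
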
	I0815 16:52:38.112597    4006 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0815 16:52:38.114238    4006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:52:38.199040    4006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:52:38.204667    4006 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000 for IP: 10.0.2.15
	I0815 16:52:38.204673    4006 certs.go:194] generating shared ca certs ...
	I0815 16:52:38.204682    4006 certs.go:226] acquiring lock for ca certs: {Name:mk1fa67494d9857cf8e0d98ec65576a15d2cd3c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:52:38.204827    4006 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-964/.minikube/ca.key
	I0815 16:52:38.204866    4006 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-964/.minikube/proxy-client-ca.key
	I0815 16:52:38.204871    4006 certs.go:256] generating profile certs ...
	I0815 16:52:38.204932    4006 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/client.key
	I0815 16:52:38.204953    4006 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/apiserver.key.0f2e981b
	I0815 16:52:38.204962    4006 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/apiserver.crt.0f2e981b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0815 16:52:38.287439    4006 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/apiserver.crt.0f2e981b ...
	I0815 16:52:38.287447    4006 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/apiserver.crt.0f2e981b: {Name:mkf4c22cf00d32eddcec4706a19179d7c8652cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:52:38.287724    4006 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/apiserver.key.0f2e981b ...
	I0815 16:52:38.287728    4006 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/apiserver.key.0f2e981b: {Name:mk6374af14d1fa912d0cf6882a96becf4f0140f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:52:38.287850    4006 certs.go:381] copying /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/apiserver.crt.0f2e981b -> /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/apiserver.crt
	I0815 16:52:38.288030    4006 certs.go:385] copying /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/apiserver.key.0f2e981b -> /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/apiserver.key
	I0815 16:52:38.288192    4006 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/proxy-client.key
	I0815 16:52:38.288315    4006 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/1446.pem (1338 bytes)
	W0815 16:52:38.288337    4006 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-964/.minikube/certs/1446_empty.pem, impossibly tiny 0 bytes
	I0815 16:52:38.288342    4006 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 16:52:38.288360    4006 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:52:38.288380    4006 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:52:38.288398    4006 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/key.pem (1679 bytes)
	I0815 16:52:38.288440    4006 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem (1708 bytes)
	I0815 16:52:38.288750    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:52:38.296614    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 16:52:38.304011    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:52:38.311675    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 16:52:38.319351    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 16:52:38.326989    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:52:38.334203    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:52:38.340956    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 16:52:38.348379    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/certs/1446.pem --> /usr/share/ca-certificates/1446.pem (1338 bytes)
	I0815 16:52:38.356026    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem --> /usr/share/ca-certificates/14462.pem (1708 bytes)
	I0815 16:52:38.363382    4006 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:52:38.370321    4006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 16:52:38.375352    4006 ssh_runner.go:195] Run: openssl version
	I0815 16:52:38.377359    4006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1446.pem && ln -fs /usr/share/ca-certificates/1446.pem /etc/ssl/certs/1446.pem"
	I0815 16:52:38.380666    4006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1446.pem
	I0815 16:52:38.382400    4006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:13 /usr/share/ca-certificates/1446.pem
	I0815 16:52:38.382423    4006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1446.pem
	I0815 16:52:38.384199    4006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1446.pem /etc/ssl/certs/51391683.0"
	I0815 16:52:38.387186    4006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14462.pem && ln -fs /usr/share/ca-certificates/14462.pem /etc/ssl/certs/14462.pem"
	I0815 16:52:38.390198    4006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14462.pem
	I0815 16:52:38.391787    4006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:13 /usr/share/ca-certificates/14462.pem
	I0815 16:52:38.391804    4006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14462.pem
	I0815 16:52:38.393890    4006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14462.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:52:38.396825    4006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:52:38.400437    4006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:52:38.402065    4006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:52:38.402087    4006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:52:38.403929    4006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:52:38.406821    4006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:52:38.408394    4006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:52:38.410258    4006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:52:38.412122    4006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:52:38.414040    4006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:52:38.415910    4006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:52:38.417664    4006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
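
Each of the six checks above uses openssl's -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; the absence of any regeneration step afterwards suggests they all passed. The same gate, reduced to a runnable Go sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cert := "/var/lib/minikube/certs/front-proxy-client.crt" // any of the six above
    	// -checkend 86400: exit 0 iff the cert is still valid 24h from now.
    	if err := exec.Command("openssl", "x509", "-noout", "-in", cert,
    		"-checkend", "86400").Run(); err != nil {
    		fmt.Println("expires within 24h (or openssl failed):", err)
    		return
    	}
    	fmt.Println("valid for at least another 24h")
    }
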
	I0815 16:52:38.419556    4006 kubeadm.go:392] StartCluster: {Name:running-upgrade-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50257 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 16:52:38.419621    4006 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:52:38.430019    4006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 16:52:38.433192    4006 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 16:52:38.433197    4006 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 16:52:38.433218    4006 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 16:52:38.436529    4006 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:52:38.436771    4006 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-853000" does not appear in /Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:52:38.436826    4006 kubeconfig.go:62] /Users/jenkins/minikube-integration/19452-964/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-853000" cluster setting kubeconfig missing "running-upgrade-853000" context setting]
	I0815 16:52:38.436959    4006 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/kubeconfig: {Name:mk7594709ce290a3e032dc58c8ec366ac5a2a141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:52:38.437625    4006 kapi.go:59] client config for running-upgrade-853000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104479610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 16:52:38.437931    4006 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 16:52:38.440756    4006 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-853000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
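
Drift detection here is just diff -u against the freshly rendered file: a non-zero exit marks the config as changed and triggers the reconfigure path below. The two hunks shown are the criSocket gaining the unix:// scheme and cgroupDriver moving from systemd to cgroupfs with two new kubelet fields. A Go approximation of the comparison (paths mirror the log; a sketch, not minikube's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml",                       // written by the old binary
    		"/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput() // rendered by this binary
    	if err != nil {
    		// diff exits 1 when the files differ; that is the drift signal.
    		fmt.Printf("drift detected, will reconfigure:\n%s", out)
    		return
    	}
    	fmt.Println("config unchanged")
    }
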
	I0815 16:52:38.440761    4006 kubeadm.go:1160] stopping kube-system containers ...
	I0815 16:52:38.440799    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:52:38.451386    4006 docker.go:483] Stopping containers: [dab91603b778 40e3d80cb4a8 9bb15813a91e 309324e5ad47 b2768a0d890b e794e6c79e18 837354ea8de4 133bf0de67aa 939e94e6f10f 3f7b118d192b a48156e9e720 740b1824648d 5bdf86dbdd8e]
	I0815 16:52:38.451459    4006 ssh_runner.go:195] Run: docker stop dab91603b778 40e3d80cb4a8 9bb15813a91e 309324e5ad47 b2768a0d890b e794e6c79e18 837354ea8de4 133bf0de67aa 939e94e6f10f 3f7b118d192b a48156e9e720 740b1824648d 5bdf86dbdd8e
	I0815 16:52:38.462778    4006 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 16:52:38.559787    4006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 16:52:38.564459    4006 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug 15 23:52 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug 15 23:52 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 15 23:52 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 15 23:52 /etc/kubernetes/scheduler.conf
	
	I0815 16:52:38.564490    4006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/admin.conf
	I0815 16:52:38.568309    4006 kubeadm.go:163] "https://control-plane.minikube.internal:50257" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:52:38.568341    4006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 16:52:38.572299    4006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/kubelet.conf
	I0815 16:52:38.575679    4006 kubeadm.go:163] "https://control-plane.minikube.internal:50257" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:52:38.575717    4006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 16:52:38.578949    4006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/controller-manager.conf
	I0815 16:52:38.581688    4006 kubeadm.go:163] "https://control-plane.minikube.internal:50257" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:52:38.581714    4006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 16:52:38.584712    4006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/scheduler.conf
	I0815 16:52:38.587844    4006 kubeadm.go:163] "https://control-plane.minikube.internal:50257" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:52:38.587868    4006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
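
All four kubeconfigs were removed because none of them mention the expected endpoint https://control-plane.minikube.internal:50257 (the files on disk were written by the older binary). The same grep-then-remove loop, condensed into an illustrative Go sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:50257"
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		// grep exits 1 when the endpoint is absent; treat that as stale.
    		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
    			fmt.Println("stale, removing so kubeadm regenerates it:", conf)
    			exec.Command("sudo", "rm", "-f", conf).Run()
    		}
    	}
    }
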
	I0815 16:52:38.590892    4006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 16:52:38.593752    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:52:38.615651    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:52:38.996566    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:52:39.187608    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:52:39.209759    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
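
Rather than a full kubeadm init, the restart path replays individual phases in the order shown: certs, kubeconfig, kubelet-start, control-plane, and local etcd. A self-contained sketch of that sequence (assumes kubeadm on PATH; minikube actually runs it over ssh_runner with the versioned binaries directory prepended to PATH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
    			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
    			return
    		}
    	}
    	fmt.Println("static pod manifests and kubeconfigs regenerated")
    }
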
	I0815 16:52:39.226605    4006 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:52:39.226692    4006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:52:39.728863    4006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:52:40.228785    4006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:52:40.233238    4006 api_server.go:72] duration metric: took 1.006623584s to wait for apiserver process to appear ...
	I0815 16:52:40.233247    4006 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:52:40.233261    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:52:45.235470    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:52:45.235545    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:52:50.236191    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:52:50.236263    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:52:55.237167    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:52:55.237250    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:53:00.238529    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:53:00.238610    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:53:05.240183    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:53:05.240270    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:53:10.242222    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:53:10.242299    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:53:15.244998    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:53:15.245083    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:53:20.247896    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:53:20.247986    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:53:25.250716    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:53:25.250806    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:53:30.252547    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:53:30.252743    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:53:35.255508    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:53:35.255590    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:53:40.257185    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
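
Every probe in this run ends with a 5-second client timeout rather than an HTTP error, meaning nothing answers on 10.0.2.15:8443 from the host's point of view. With the qemu2 driver, 10.0.2.15 is the guest's user-mode NAT address, which is generally not reachable from the macOS host, and that would account for the uniform timeouts. A minimal version of the probe (InsecureSkipVerify is for the sketch only; the real client is configured with the cluster CA, as the rest.Config above shows):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gaps between probes above
    		Transport: &http.Transport{
    			// Sketch only: the real client trusts the cluster CA instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		fmt.Println("stopped:", err) // the "context deadline exceeded" case
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.Status)
    }
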
	I0815 16:53:40.257459    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:53:40.287003    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:53:40.287131    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:53:40.304310    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:53:40.304403    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:53:40.317062    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:53:40.317128    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:53:40.328430    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:53:40.328520    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:53:40.341440    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:53:40.341512    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:53:40.352157    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:53:40.352232    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:53:40.362085    4006 logs.go:276] 0 containers: []
	W0815 16:53:40.362095    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:53:40.362144    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:53:40.372451    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:53:40.372473    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:53:40.372477    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:53:40.391960    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:53:40.391970    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:53:40.465268    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:53:40.465281    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:53:40.480633    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:53:40.480645    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:53:40.492335    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:53:40.492345    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:53:40.503943    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:53:40.503956    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:53:40.515492    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:53:40.515504    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:53:40.519986    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:53:40.519992    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:53:40.539564    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:53:40.539574    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:53:40.555113    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:53:40.555126    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:53:40.573633    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:53:40.573647    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:53:40.588749    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:53:40.588761    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:53:40.628369    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:53:40.628379    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:53:40.639798    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:53:40.639810    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:53:40.666304    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:53:40.666312    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:53:40.685092    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:53:40.685102    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:53:40.704035    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:53:40.704047    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
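
With the API server still unreachable, minikube falls back to collecting diagnostics: container IDs are discovered with docker ps -a --filter=name=k8s_..., then each container's last 400 log lines are pulled, alongside the kubelet and Docker journals, dmesg, and crictl/docker ps output. The same cycle repeats below until the start deadline expires. The per-container gather step, reduced to a sketch (IDs are the ones this log reports):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The two kube-apiserver container IDs found by the pass above.
    	for _, id := range []string{"095f0eb8e679", "939e94e6f10f"} {
    		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			fmt.Println(id, "error:", err)
    			continue
    		}
    		fmt.Printf("== kube-apiserver %s ==\n%s", id, out)
    	}
    }
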
	I0815 16:53:43.219132    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:53:48.221968    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:53:48.222414    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:53:48.262172    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:53:48.262315    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:53:48.284390    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:53:48.284525    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:53:48.299178    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:53:48.299259    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:53:48.311636    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:53:48.311711    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:53:48.324755    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:53:48.324826    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:53:48.337528    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:53:48.337588    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:53:48.348138    4006 logs.go:276] 0 containers: []
	W0815 16:53:48.348149    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:53:48.348202    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:53:48.358941    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:53:48.358960    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:53:48.358966    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:53:48.370765    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:53:48.370775    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:53:48.375295    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:53:48.375302    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:53:48.389905    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:53:48.389917    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:53:48.405194    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:53:48.405204    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:53:48.425000    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:53:48.425013    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:53:48.437247    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:53:48.437258    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:53:48.448859    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:53:48.448867    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:53:48.475480    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:53:48.475488    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:53:48.487234    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:53:48.487248    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:53:48.499610    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:53:48.499622    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:53:48.540739    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:53:48.540746    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:53:48.576719    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:53:48.576733    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:53:48.596332    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:53:48.596342    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:53:48.610239    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:53:48.610248    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:53:48.627793    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:53:48.627806    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:53:48.642194    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:53:48.642203    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:53:51.155948    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:53:56.158712    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:53:56.159150    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:53:56.200388    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:53:56.200528    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:53:56.221152    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:53:56.221244    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:53:56.236082    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:53:56.236155    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:53:56.248400    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:53:56.248471    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:53:56.259023    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:53:56.259090    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:53:56.269390    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:53:56.269457    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:53:56.279735    4006 logs.go:276] 0 containers: []
	W0815 16:53:56.279747    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:53:56.279803    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:53:56.291073    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:53:56.291089    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:53:56.291095    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:53:56.309511    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:53:56.309521    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:53:56.321901    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:53:56.321913    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:53:56.360989    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:53:56.360998    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:53:56.375372    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:53:56.375386    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:53:56.390410    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:53:56.390422    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:53:56.405811    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:53:56.405822    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:53:56.418435    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:53:56.418447    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:53:56.459313    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:53:56.459327    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
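As I read the util-linux flags, this dmesg invocation prints human-readable timestamps without a pager (-H -P), disables color (-L=never), restricts output to warn-and-worse kernel messages (--level warn,err,crit,alert,emerg), and keeps only the last 400 lines via the trailing tail.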
	I0815 16:53:56.463610    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:53:56.463618    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:53:56.482886    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:53:56.482895    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:53:56.500983    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:53:56.500994    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:53:56.525901    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:53:56.525908    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:53:56.539393    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:53:56.539403    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:53:56.550182    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:53:56.550194    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:53:56.561666    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:53:56.561679    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:53:56.573124    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:53:56.573137    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
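The timestamps reveal the cadence of the outer loop: a 5-second probe, roughly half a second of log gathering, then a ~2.5-second pause before the next probe, for a cycle of about 8 seconds (probes at 16:53:51, 16:53:59, 16:54:07, ...). A sketch of that control flow, illustrative only and not minikube's actual implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForAPIServer repeats the probe/gather/pause cycle seen in the log
// until the probe succeeds or an overall deadline expires.
func waitForAPIServer(probe func() error, gather func(), overall time.Duration) error {
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		if err := probe(); err == nil {
			return nil
		}
		gather()                            // the docker/journalctl collection pass
		time.Sleep(2500 * time.Millisecond) // matches the ~2.5s gap before the next probe
	}
	return errors.New("apiserver never reported healthy")
}

func main() {
	err := waitForAPIServer(
		func() error { return errors.New("context deadline exceeded") }, // always fails, as in this run
		func() { fmt.Println("gathering logs ...") },
		30*time.Second,
	)
	fmt.Println(err)
}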
	I0815 16:53:59.086290    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:54:04.089200    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:54:04.089696    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:54:04.128384    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:54:04.128518    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:54:04.150730    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:54:04.150842    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:54:04.165888    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:54:04.165953    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:54:04.180283    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:54:04.180353    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:54:04.191254    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:54:04.191314    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:54:04.202198    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:54:04.202261    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:54:04.212161    4006 logs.go:276] 0 containers: []
	W0815 16:54:04.212171    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:54:04.212226    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:54:04.222155    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:54:04.222171    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:54:04.222176    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:54:04.263628    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:54:04.263635    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:54:04.277353    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:54:04.277366    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:54:04.301832    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:54:04.301846    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:54:04.313546    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:54:04.313558    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:54:04.325133    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:54:04.325146    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:54:04.336744    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:54:04.336756    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:54:04.341276    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:54:04.341285    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:54:04.362952    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:54:04.362963    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:54:04.382113    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:54:04.382122    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:54:04.419755    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:54:04.419767    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:54:04.431424    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:54:04.431434    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:54:04.442376    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:54:04.442386    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:54:04.468539    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:54:04.468553    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:54:04.486840    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:54:04.486852    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:54:04.505731    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:54:04.505745    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:54:04.517057    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:54:04.517071    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:54:07.032879    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:54:12.035108    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:54:12.035580    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:54:12.076581    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:54:12.076720    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:54:12.098783    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:54:12.098901    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:54:12.114052    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:54:12.114128    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:54:12.126993    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:54:12.127068    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:54:12.138123    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:54:12.138193    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:54:12.148766    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:54:12.148839    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:54:12.160301    4006 logs.go:276] 0 containers: []
	W0815 16:54:12.160311    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:54:12.160364    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:54:12.170776    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:54:12.170794    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:54:12.170800    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:54:12.175067    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:54:12.175076    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:54:12.186964    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:54:12.186973    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:54:12.198103    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:54:12.198115    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:54:12.240629    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:54:12.240641    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:54:12.260018    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:54:12.260029    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:54:12.271944    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:54:12.271959    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:54:12.283538    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:54:12.283554    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:54:12.326124    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:54:12.326142    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:54:12.340544    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:54:12.340560    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:54:12.351738    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:54:12.351753    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:54:12.365810    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:54:12.365824    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:54:12.377422    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:54:12.377431    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:54:12.394823    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:54:12.394834    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:54:12.412283    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:54:12.412295    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:54:12.437098    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:54:12.437107    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:54:12.463613    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:54:12.463619    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:54:14.977358    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:54:19.979788    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:54:19.980225    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:54:20.018539    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:54:20.018684    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:54:20.038728    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:54:20.038833    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:54:20.053459    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:54:20.053539    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:54:20.067734    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:54:20.067799    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:54:20.077954    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:54:20.078022    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:54:20.088812    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:54:20.088886    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:54:20.099208    4006 logs.go:276] 0 containers: []
	W0815 16:54:20.099217    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:54:20.099274    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:54:20.109108    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:54:20.109125    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:54:20.109131    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:54:20.148829    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:54:20.148836    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:54:20.165980    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:54:20.165993    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:54:20.180461    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:54:20.180476    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:54:20.192461    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:54:20.192470    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:54:20.209851    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:54:20.209861    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:54:20.221401    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:54:20.221416    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:54:20.225937    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:54:20.225942    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:54:20.260818    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:54:20.260828    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:54:20.280781    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:54:20.280793    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:54:20.294588    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:54:20.294601    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:54:20.305938    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:54:20.305951    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:54:20.317218    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:54:20.317229    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:54:20.330570    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:54:20.330584    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:54:20.344857    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:54:20.344868    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:54:20.358920    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:54:20.358934    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:54:20.370395    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:54:20.370405    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:54:22.897913    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:54:27.900743    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:54:27.901092    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:54:27.935562    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:54:27.935685    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:54:27.954935    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:54:27.955018    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:54:27.969673    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:54:27.969747    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:54:27.989956    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:54:27.990004    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:54:28.001327    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:54:28.001403    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:54:28.012229    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:54:28.012289    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:54:28.023070    4006 logs.go:276] 0 containers: []
	W0815 16:54:28.023085    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:54:28.023132    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:54:28.033730    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:54:28.033745    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:54:28.033749    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:54:28.045182    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:54:28.045190    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:54:28.058895    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:54:28.058904    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:54:28.096431    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:54:28.096441    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:54:28.107527    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:54:28.107538    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:54:28.119269    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:54:28.119281    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:54:28.137098    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:54:28.137108    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:54:28.162992    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:54:28.162999    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:54:28.167353    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:54:28.167358    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:54:28.184788    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:54:28.184798    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:54:28.198483    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:54:28.198493    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:54:28.212275    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:54:28.212285    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:54:28.224038    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:54:28.224047    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:54:28.234991    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:54:28.235001    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:54:28.246163    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:54:28.246172    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:54:28.285550    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:54:28.285558    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:54:28.299002    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:54:28.299012    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:54:30.821507    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:54:35.824024    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:54:35.824363    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:54:35.854054    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:54:35.854147    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:54:35.876520    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:54:35.876594    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:54:35.889869    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:54:35.889949    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:54:35.902205    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:54:35.902278    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:54:35.912679    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:54:35.912732    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:54:35.923330    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:54:35.923382    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:54:35.933192    4006 logs.go:276] 0 containers: []
	W0815 16:54:35.933204    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:54:35.933258    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:54:35.943778    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:54:35.943795    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:54:35.943801    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:54:35.948202    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:54:35.948212    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:54:35.970230    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:54:35.970240    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:54:35.984166    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:54:35.984179    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:54:35.995275    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:54:35.995288    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:54:36.007536    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:54:36.007548    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:54:36.047236    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:54:36.047247    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:54:36.081826    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:54:36.081836    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:54:36.096567    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:54:36.096578    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:54:36.107853    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:54:36.107865    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:54:36.119413    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:54:36.119424    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:54:36.131446    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:54:36.131459    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:54:36.148693    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:54:36.148705    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:54:36.165800    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:54:36.165811    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:54:36.178134    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:54:36.178147    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:54:36.192358    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:54:36.192371    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:54:36.203822    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:54:36.203833    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:54:38.731704    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:54:43.734294    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:54:43.734721    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:54:43.775454    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:54:43.775609    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:54:43.797171    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:54:43.797271    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:54:43.812881    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:54:43.812957    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:54:43.825069    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:54:43.825136    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:54:43.840172    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:54:43.840229    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:54:43.852877    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:54:43.852938    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:54:43.863893    4006 logs.go:276] 0 containers: []
	W0815 16:54:43.863909    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:54:43.863961    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:54:43.874570    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:54:43.874586    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:54:43.874591    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:54:43.892415    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:54:43.892425    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:54:43.916024    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:54:43.916030    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:54:43.954518    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:54:43.954528    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:54:43.959239    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:54:43.959247    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:54:43.980620    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:54:43.980631    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:54:43.991971    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:54:43.991982    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:54:44.003666    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:54:44.003675    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:54:44.016099    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:54:44.016112    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:54:44.046150    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:54:44.046162    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:54:44.061925    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:54:44.061937    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:54:44.073762    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:54:44.073774    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:54:44.108807    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:54:44.108816    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:54:44.131381    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:54:44.131392    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:54:44.145103    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:54:44.145117    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:54:44.162592    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:54:44.162601    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:54:44.174394    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:54:44.174406    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:54:46.692807    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:54:51.694214    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:54:51.694670    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:54:51.736219    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:54:51.736348    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:54:51.759140    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:54:51.759254    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:54:51.776926    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:54:51.777009    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:54:51.791554    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:54:51.791621    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:54:51.808954    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:54:51.809034    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:54:51.821604    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:54:51.821673    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:54:51.831532    4006 logs.go:276] 0 containers: []
	W0815 16:54:51.831544    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:54:51.831596    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:54:51.842646    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:54:51.842663    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:54:51.842668    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:54:51.857469    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:54:51.857480    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:54:51.869678    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:54:51.869687    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:54:51.903498    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:54:51.903512    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:54:51.917806    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:54:51.917817    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:54:51.929440    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:54:51.929451    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:54:51.946623    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:54:51.946633    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:54:51.970138    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:54:51.970149    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:54:52.009005    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:54:52.009013    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:54:52.028440    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:54:52.028451    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:54:52.039841    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:54:52.039855    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:54:52.051248    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:54:52.051262    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:54:52.062456    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:54:52.062467    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:54:52.086988    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:54:52.086996    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:54:52.091776    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:54:52.091784    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:54:52.109387    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:54:52.109398    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:54:52.121411    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:54:52.121419    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:54:54.637845    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:54:59.640253    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:54:59.640706    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:54:59.694302    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:54:59.694430    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:54:59.711805    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:54:59.711884    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:54:59.725551    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:54:59.725625    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:54:59.739160    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:54:59.739236    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:54:59.749563    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:54:59.749635    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:54:59.760727    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:54:59.760794    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:54:59.771921    4006 logs.go:276] 0 containers: []
	W0815 16:54:59.771934    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:54:59.771983    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:54:59.793463    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:54:59.793486    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:54:59.793493    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:54:59.805004    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:54:59.805017    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:54:59.816253    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:54:59.816267    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:54:59.837017    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:54:59.837030    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:54:59.855492    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:54:59.855502    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:54:59.867243    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:54:59.867256    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:54:59.893673    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:54:59.893681    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:54:59.935701    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:54:59.935707    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:54:59.940361    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:54:59.940367    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:54:59.975853    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:54:59.975867    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:54:59.990762    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:54:59.990776    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:55:00.002556    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:55:00.002566    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:55:00.017683    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:55:00.017695    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:55:00.029445    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:55:00.029457    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:55:00.046094    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:55:00.046106    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:55:00.066007    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:55:00.066021    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:55:00.077910    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:55:00.077922    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:55:02.592131    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:55:07.594655    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:55:07.595071    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:55:07.642241    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:55:07.642379    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:55:07.663060    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:55:07.663162    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:55:07.677144    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:55:07.677206    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:55:07.688957    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:55:07.689019    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:55:07.699449    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:55:07.699509    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:55:07.710377    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:55:07.710432    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:55:07.720893    4006 logs.go:276] 0 containers: []
	W0815 16:55:07.720906    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:55:07.720964    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:55:07.731836    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:55:07.731852    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:55:07.731858    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:55:07.743589    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:55:07.743599    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:55:07.748531    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:55:07.748539    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:55:07.784408    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:55:07.784423    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:55:07.804821    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:55:07.804832    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:55:07.816783    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:55:07.816795    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:55:07.828496    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:55:07.828509    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:55:07.840654    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:55:07.840665    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:55:07.852526    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:55:07.852539    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:55:07.863666    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:55:07.863678    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:55:07.888789    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:55:07.888799    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:55:07.904967    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:55:07.904979    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:55:07.922798    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:55:07.922815    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:55:07.964486    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:55:07.964496    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:55:07.978958    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:55:07.978970    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:55:07.997005    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:55:07.997015    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:55:08.008795    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:55:08.008806    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:55:10.526281    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:55:15.528501    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:55:15.528681    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:55:15.547142    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:55:15.547232    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:55:15.562278    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:55:15.562366    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:55:15.575411    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:55:15.575504    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:55:15.587799    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:55:15.587867    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:55:15.600761    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:55:15.600851    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:55:15.612959    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:55:15.613035    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:55:15.624861    4006 logs.go:276] 0 containers: []
	W0815 16:55:15.624875    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:55:15.624943    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:55:15.636719    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:55:15.636742    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:55:15.636749    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:55:15.680440    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:55:15.680459    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:55:15.695523    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:55:15.695538    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:55:15.714474    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:55:15.714489    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:55:15.730802    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:55:15.730813    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:55:15.753194    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:55:15.753210    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:55:15.773619    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:55:15.773634    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:55:15.786422    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:55:15.786434    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:55:15.802383    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:55:15.802397    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:55:15.815795    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:55:15.815807    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:55:15.841304    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:55:15.841318    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:55:15.846319    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:55:15.846330    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:55:15.891001    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:55:15.891016    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:55:15.907041    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:55:15.907055    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:55:15.920350    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:55:15.920363    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:55:15.933723    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:55:15.933737    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:55:15.947113    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:55:15.947124    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:55:18.464215    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:55:23.466991    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:55:23.467116    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:55:23.480506    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:55:23.480576    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:55:23.491051    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:55:23.491122    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:55:23.501304    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:55:23.501367    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:55:23.511549    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:55:23.511621    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:55:23.521925    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:55:23.521994    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:55:23.536589    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:55:23.536672    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:55:23.546572    4006 logs.go:276] 0 containers: []
	W0815 16:55:23.546585    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:55:23.546637    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:55:23.556758    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:55:23.556774    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:55:23.556779    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:55:23.576336    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:55:23.576349    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:55:23.595365    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:55:23.595376    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:55:23.619510    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:55:23.619524    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:55:23.632060    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:55:23.632078    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:55:23.655223    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:55:23.655233    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:55:23.667350    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:55:23.667362    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:55:23.679403    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:55:23.679416    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:55:23.721210    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:55:23.721223    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:55:23.758205    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:55:23.758219    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:55:23.774116    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:55:23.774126    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:55:23.792297    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:55:23.792308    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:55:23.804185    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:55:23.804196    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:55:23.819189    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:55:23.819200    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:55:23.824247    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:55:23.824255    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:55:23.836747    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:55:23.836760    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:55:23.855487    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:55:23.855498    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:55:26.376463    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:55:31.378765    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:55:31.378943    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:55:31.395294    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:55:31.395383    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:55:31.409281    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:55:31.409367    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:55:31.422764    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:55:31.422837    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:55:31.433494    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:55:31.433561    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:55:31.444100    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:55:31.444170    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:55:31.454601    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:55:31.454683    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:55:31.464616    4006 logs.go:276] 0 containers: []
	W0815 16:55:31.464627    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:55:31.464683    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:55:31.475439    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:55:31.475460    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:55:31.475465    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:55:31.515657    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:55:31.515666    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:55:31.529512    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:55:31.529525    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:55:31.541052    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:55:31.541062    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:55:31.552674    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:55:31.552685    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:55:31.564289    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:55:31.564299    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:55:31.577999    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:55:31.578010    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:55:31.589626    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:55:31.589636    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:55:31.607327    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:55:31.607335    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:55:31.612259    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:55:31.612269    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:55:31.648163    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:55:31.648175    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:55:31.663343    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:55:31.663353    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:55:31.680696    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:55:31.680707    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:55:31.705312    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:55:31.705320    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:55:31.724714    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:55:31.724727    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:55:31.736266    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:55:31.736277    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:55:31.748298    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:55:31.748308    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:55:34.262506    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:55:39.265275    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:55:39.265384    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:55:39.276733    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:55:39.276805    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:55:39.292113    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:55:39.292190    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:55:39.309016    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:55:39.309089    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:55:39.319951    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:55:39.320022    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:55:39.333100    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:55:39.333174    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:55:39.346420    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:55:39.346505    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:55:39.359395    4006 logs.go:276] 0 containers: []
	W0815 16:55:39.359407    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:55:39.359469    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:55:39.370453    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:55:39.370471    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:55:39.370476    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:55:39.383487    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:55:39.383498    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:55:39.407975    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:55:39.407989    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:55:39.422845    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:55:39.422857    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:55:39.427067    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:55:39.427075    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:55:39.444626    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:55:39.444639    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:55:39.471071    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:55:39.471085    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:55:39.483491    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:55:39.483503    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:55:39.496823    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:55:39.496835    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:55:39.509888    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:55:39.509900    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:55:39.552949    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:55:39.552964    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:55:39.571706    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:55:39.571719    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:55:39.583979    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:55:39.583991    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:55:39.606730    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:55:39.606744    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:55:39.642469    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:55:39.642484    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:55:39.656790    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:55:39.656805    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:55:39.675072    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:55:39.675090    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:55:42.190442    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:55:47.193290    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:55:47.193503    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:55:47.207635    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:55:47.207717    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:55:47.219338    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:55:47.219411    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:55:47.233385    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:55:47.233466    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:55:47.244216    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:55:47.244291    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:55:47.255172    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:55:47.255235    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:55:47.266061    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:55:47.266131    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:55:47.276705    4006 logs.go:276] 0 containers: []
	W0815 16:55:47.276716    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:55:47.276776    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:55:47.288035    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:55:47.288053    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:55:47.288058    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:55:47.299815    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:55:47.299828    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:55:47.315272    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:55:47.315282    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:55:47.326875    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:55:47.326888    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:55:47.349296    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:55:47.349308    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:55:47.368270    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:55:47.368281    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:55:47.387140    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:55:47.387150    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:55:47.400665    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:55:47.400677    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:55:47.419288    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:55:47.419299    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:55:47.430680    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:55:47.430690    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:55:47.444485    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:55:47.444498    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:55:47.455465    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:55:47.455479    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:55:47.479539    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:55:47.479552    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:55:47.518954    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:55:47.518962    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:55:47.523333    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:55:47.523341    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:55:47.558574    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:55:47.558584    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:55:47.570601    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:55:47.570612    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:55:50.094932    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:55:55.097878    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:55:55.098278    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:55:55.141099    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:55:55.141231    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:55:55.160811    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:55:55.160903    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:55:55.175249    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:55:55.175329    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:55:55.187292    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:55:55.187360    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:55:55.197958    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:55:55.198026    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:55:55.209001    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:55:55.209069    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:55:55.219717    4006 logs.go:276] 0 containers: []
	W0815 16:55:55.219731    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:55:55.219787    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:55:55.230309    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:55:55.230329    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:55:55.230334    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:55:55.244380    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:55:55.244393    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:55:55.260289    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:55:55.260304    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:55:55.275552    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:55:55.275564    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:55:55.299392    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:55:55.299406    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:55:55.304051    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:55:55.304060    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:55:55.318087    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:55:55.318100    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:55:55.336383    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:55:55.336395    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:55:55.347870    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:55:55.347885    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:55:55.366624    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:55:55.366636    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:55:55.379834    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:55:55.379848    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:55:55.391217    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:55:55.391230    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:55:55.434122    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:55:55.434136    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:55:55.447960    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:55:55.447972    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:55:55.487536    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:55:55.487546    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:55:55.504513    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:55:55.504524    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:55:55.516570    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:55:55.516579    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:55:58.036304    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:03.039024    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:03.039258    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:56:03.051296    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:56:03.051374    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:56:03.066962    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:56:03.067038    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:56:03.077174    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:56:03.077241    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:56:03.087822    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:56:03.087898    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:56:03.098878    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:56:03.098943    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:56:03.109622    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:56:03.109685    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:56:03.120579    4006 logs.go:276] 0 containers: []
	W0815 16:56:03.120591    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:56:03.120649    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:56:03.131782    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:56:03.131800    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:56:03.131805    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:56:03.136656    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:56:03.136662    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:56:03.158858    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:56:03.158872    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:56:03.176281    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:56:03.176291    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:56:03.189580    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:56:03.189590    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:56:03.201619    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:56:03.201632    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:56:03.243650    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:56:03.243657    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:56:03.257497    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:56:03.257506    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:56:03.276869    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:56:03.276879    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:56:03.291222    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:56:03.291235    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:56:03.303001    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:56:03.303013    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:56:03.327244    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:56:03.327255    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:56:03.364536    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:56:03.364556    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:56:03.382859    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:56:03.382872    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:56:03.396449    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:56:03.396460    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:56:03.412586    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:56:03.412599    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:56:03.450118    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:56:03.450129    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:56:05.966208    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:10.968757    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:10.969188    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:56:11.011212    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:56:11.011361    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:56:11.031754    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:56:11.031859    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:56:11.054898    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:56:11.054970    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:56:11.067231    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:56:11.067308    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:56:11.077552    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:56:11.077621    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:56:11.088222    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:56:11.088303    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:56:11.098699    4006 logs.go:276] 0 containers: []
	W0815 16:56:11.098712    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:56:11.098770    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:56:11.109597    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:56:11.109615    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:56:11.109621    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:56:11.148954    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:56:11.148962    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:56:11.166523    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:56:11.166536    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:56:11.177627    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:56:11.177640    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:56:11.189277    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:56:11.189287    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:56:11.203948    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:56:11.203958    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:56:11.215551    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:56:11.215562    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:56:11.230727    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:56:11.230737    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:56:11.242095    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:56:11.242107    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:56:11.265165    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:56:11.265171    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:56:11.277003    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:56:11.277014    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:56:11.281766    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:56:11.281771    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:56:11.296220    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:56:11.296233    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:56:11.313790    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:56:11.313803    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:56:11.333337    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:56:11.333346    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:56:11.368227    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:56:11.368236    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:56:11.388698    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:56:11.388712    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:56:13.902232    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:18.904582    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:18.904700    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:56:18.916680    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:56:18.916761    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:56:18.929962    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:56:18.930054    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:56:18.944733    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:56:18.944832    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:56:18.956853    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:56:18.956924    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:56:18.973011    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:56:18.973084    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:56:18.991716    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:56:18.991792    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:56:19.008048    4006 logs.go:276] 0 containers: []
	W0815 16:56:19.008061    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:56:19.008135    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:56:19.020064    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:56:19.020084    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:56:19.020089    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:56:19.061263    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:56:19.061277    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:56:19.076109    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:56:19.076124    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:56:19.103271    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:56:19.103295    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:56:19.108574    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:56:19.108587    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:56:19.124691    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:56:19.124709    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:56:19.140729    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:56:19.140742    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:56:19.157597    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:56:19.157611    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:56:19.170079    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:56:19.170093    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:56:19.195997    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:56:19.196011    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:56:19.211663    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:56:19.211681    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:56:19.228777    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:56:19.228792    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:56:19.243408    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:56:19.243420    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:56:19.256476    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:56:19.256489    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:56:19.303236    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:56:19.303262    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:56:19.325375    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:56:19.325394    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:56:19.338466    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:56:19.338477    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:56:21.859782    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:26.861009    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:26.861127    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:56:26.872643    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:56:26.872723    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:56:26.884015    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:56:26.884083    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:56:26.895206    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:56:26.895284    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:56:26.907137    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:56:26.907211    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:56:26.918955    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:56:26.919027    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:56:26.934353    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:56:26.934426    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:56:26.945784    4006 logs.go:276] 0 containers: []
	W0815 16:56:26.945800    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:56:26.945864    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:56:26.960096    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:56:26.960118    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:56:26.960123    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:56:27.001035    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:56:27.001050    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:56:27.021576    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:56:27.021593    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:56:27.035959    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:56:27.035970    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:56:27.047826    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:56:27.047837    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:56:27.064340    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:56:27.064351    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:56:27.082600    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:56:27.082615    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:56:27.094451    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:56:27.094461    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:56:27.106405    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:56:27.106417    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:56:27.110843    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:56:27.110850    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:56:27.146722    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:56:27.146738    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:56:27.161611    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:56:27.161622    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:56:27.176012    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:56:27.176023    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:56:27.190351    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:56:27.190362    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:56:27.208499    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:56:27.208509    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:56:27.220311    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:56:27.220327    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:56:27.245459    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:56:27.245484    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:56:29.760651    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:34.763032    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:34.763276    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:56:34.787502    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:56:34.787618    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:56:34.803668    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:56:34.803766    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:56:34.818259    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:56:34.818330    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:56:34.829553    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:56:34.829613    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:56:34.864991    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:56:34.865072    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:56:34.877426    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:56:34.877493    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:56:34.888934    4006 logs.go:276] 0 containers: []
	W0815 16:56:34.888951    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:56:34.889035    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:56:34.899841    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:56:34.899857    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:56:34.899862    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:56:34.911545    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:56:34.911559    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:56:34.926113    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:56:34.926124    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:56:34.937965    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:56:34.937976    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:56:34.952301    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:56:34.952314    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:56:34.969908    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:56:34.969919    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:56:34.993019    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:56:34.993029    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:56:35.034845    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:56:35.034860    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:56:35.052384    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:56:35.052394    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:56:35.063879    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:56:35.063890    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:56:35.077398    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:56:35.077412    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:56:35.111416    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:56:35.111428    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:56:35.126439    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:56:35.126452    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:56:35.145871    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:56:35.145884    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:56:35.157762    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:56:35.157774    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:56:35.170193    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:56:35.170204    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:56:35.182099    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:56:35.182112    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:56:37.687726    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:42.690103    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:42.690181    4006 kubeadm.go:597] duration metric: took 4m4.254291083s to restartPrimaryControlPlane
	W0815 16:56:42.690219    4006 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 16:56:42.690238    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0815 16:56:43.709900    4006 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.019640125s)
	I0815 16:56:43.709964    4006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:56:43.715003    4006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 16:56:43.717995    4006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 16:56:43.720845    4006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 16:56:43.720851    4006 kubeadm.go:157] found existing configuration files:
	
	I0815 16:56:43.720871    4006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/admin.conf
	I0815 16:56:43.723518    4006 kubeadm.go:163] "https://control-plane.minikube.internal:50257" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 16:56:43.723545    4006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 16:56:43.726405    4006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/kubelet.conf
	I0815 16:56:43.729599    4006 kubeadm.go:163] "https://control-plane.minikube.internal:50257" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 16:56:43.729624    4006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 16:56:43.733119    4006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/controller-manager.conf
	I0815 16:56:43.736057    4006 kubeadm.go:163] "https://control-plane.minikube.internal:50257" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 16:56:43.736081    4006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 16:56:43.738725    4006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/scheduler.conf
	I0815 16:56:43.741503    4006 kubeadm.go:163] "https://control-plane.minikube.internal:50257" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 16:56:43.741527    4006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
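
The four grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected apiserver endpoint for this profile; after the kubeadm reset none of the files exist, so every grep exits with status 2 and the rm -f is a no-op. A minimal shell sketch of the same check (endpoint and file list taken from the log lines above):

	ENDPOINT="https://control-plane.minikube.internal:50257"
	for f in admin kubelet controller-manager scheduler; do
	  # keep the file only if it references the expected endpoint;
	  # a missing or stale file is removed so kubeadm init regenerates it
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done
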
	I0815 16:56:43.744581    4006 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 16:56:43.762526    4006 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0815 16:56:43.762568    4006 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 16:56:43.814945    4006 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 16:56:43.814997    4006 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 16:56:43.815056    4006 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 16:56:43.867727    4006 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 16:56:43.871884    4006 out.go:235]   - Generating certificates and keys ...
	I0815 16:56:43.871946    4006 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 16:56:43.872018    4006 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 16:56:43.872066    4006 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 16:56:43.872167    4006 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 16:56:43.872314    4006 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 16:56:43.872376    4006 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 16:56:43.872441    4006 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 16:56:43.872482    4006 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 16:56:43.872599    4006 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 16:56:43.872671    4006 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 16:56:43.872732    4006 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 16:56:43.872774    4006 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 16:56:43.994908    4006 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 16:56:44.114740    4006 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 16:56:44.155223    4006 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 16:56:44.205468    4006 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 16:56:44.237123    4006 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 16:56:44.237454    4006 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 16:56:44.237508    4006 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 16:56:44.330009    4006 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 16:56:44.334632    4006 out.go:235]   - Booting up control plane ...
	I0815 16:56:44.334675    4006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 16:56:44.334712    4006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 16:56:44.334751    4006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 16:56:44.334794    4006 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 16:56:44.339727    4006 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 16:56:48.342401    4006 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002324 seconds
	I0815 16:56:48.342584    4006 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 16:56:48.346913    4006 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 16:56:48.855926    4006 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 16:56:48.856831    4006 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-853000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 16:56:49.362216    4006 kubeadm.go:310] [bootstrap-token] Using token: 1q2sqo.90ak6svcf6z91vtn
	I0815 16:56:49.368591    4006 out.go:235]   - Configuring RBAC rules ...
	I0815 16:56:49.368668    4006 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 16:56:49.368764    4006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 16:56:49.374029    4006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 16:56:49.375366    4006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 16:56:49.376569    4006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 16:56:49.377700    4006 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 16:56:49.382010    4006 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 16:56:49.553224    4006 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 16:56:49.767949    4006 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 16:56:49.768343    4006 kubeadm.go:310] 
	I0815 16:56:49.768374    4006 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 16:56:49.768379    4006 kubeadm.go:310] 
	I0815 16:56:49.768416    4006 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 16:56:49.768420    4006 kubeadm.go:310] 
	I0815 16:56:49.768433    4006 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 16:56:49.768478    4006 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 16:56:49.768510    4006 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 16:56:49.768512    4006 kubeadm.go:310] 
	I0815 16:56:49.768546    4006 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 16:56:49.768552    4006 kubeadm.go:310] 
	I0815 16:56:49.768588    4006 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 16:56:49.768594    4006 kubeadm.go:310] 
	I0815 16:56:49.768633    4006 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 16:56:49.768683    4006 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 16:56:49.768759    4006 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 16:56:49.768776    4006 kubeadm.go:310] 
	I0815 16:56:49.768972    4006 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 16:56:49.769123    4006 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 16:56:49.769128    4006 kubeadm.go:310] 
	I0815 16:56:49.769173    4006 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1q2sqo.90ak6svcf6z91vtn \
	I0815 16:56:49.769260    4006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88912a497139cdcb80d3af465e15c830e797440a4ec3ed41d3c948a9662aad9e \
	I0815 16:56:49.769273    4006 kubeadm.go:310] 	--control-plane 
	I0815 16:56:49.769276    4006 kubeadm.go:310] 
	I0815 16:56:49.769318    4006 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 16:56:49.769324    4006 kubeadm.go:310] 
	I0815 16:56:49.769389    4006 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1q2sqo.90ak6svcf6z91vtn \
	I0815 16:56:49.769469    4006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88912a497139cdcb80d3af465e15c830e797440a4ec3ed41d3c948a9662aad9e 
	I0815 16:56:49.769575    4006 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
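
The join commands above embed a fresh bootstrap token and the cluster CA's discovery hash. As a hedged aside, that sha256 hash can be recomputed on the node with the standard kubeadm/openssl pipeline; the certificate path follows the "[certs] Using certificateDir folder" line above:

	# recompute the discovery-token-ca-cert-hash on the control-plane node
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
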
	I0815 16:56:49.769585    4006 cni.go:84] Creating CNI manager for ""
	I0815 16:56:49.769594    4006 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:56:49.772494    4006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 16:56:49.778459    4006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 16:56:49.783095    4006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
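
The 496-byte conflist copied above is the bridge CNI config minikube selected for the qemu2/docker combination. Its exact bytes are not reproduced in this log; as a sketch only, a bridge conflist of the shape minikube writes looks roughly like the following (the pod subnet is an assumption, not read from this run):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
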
	I0815 16:56:49.787899    4006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 16:56:49.787974    4006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 16:56:49.787975    4006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-853000 minikube.k8s.io/updated_at=2024_08_15T16_56_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=running-upgrade-853000 minikube.k8s.io/primary=true
	I0815 16:56:49.791125    4006 ops.go:34] apiserver oom_adj: -16
	I0815 16:56:49.837298    4006 kubeadm.go:1113] duration metric: took 49.3635ms to wait for elevateKubeSystemPrivileges
	I0815 16:56:49.837312    4006 kubeadm.go:394] duration metric: took 4m11.414990583s to StartCluster
	I0815 16:56:49.837322    4006 settings.go:142] acquiring lock: {Name:mk3ef55eecb064d007fbd1b55ea891b5b51acd22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:56:49.837408    4006 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:56:49.837778    4006 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/kubeconfig: {Name:mk7594709ce290a3e032dc58c8ec366ac5a2a141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:56:49.837971    4006 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:56:49.838064    4006 config.go:182] Loaded profile config "running-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 16:56:49.838056    4006 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 16:56:49.838095    4006 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-853000"
	I0815 16:56:49.838102    4006 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-853000"
	I0815 16:56:49.838107    4006 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-853000"
	W0815 16:56:49.838111    4006 addons.go:243] addon storage-provisioner should already be in state true
	I0815 16:56:49.838115    4006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-853000"
	I0815 16:56:49.838123    4006 host.go:66] Checking if "running-upgrade-853000" exists ...
	I0815 16:56:49.839096    4006 kapi.go:59] client config for running-upgrade-853000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104479610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 16:56:49.839229    4006 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-853000"
	W0815 16:56:49.839234    4006 addons.go:243] addon default-storageclass should already be in state true
	I0815 16:56:49.839241    4006 host.go:66] Checking if "running-upgrade-853000" exists ...
	I0815 16:56:49.842529    4006 out.go:177] * Verifying Kubernetes components...
	I0815 16:56:49.842861    4006 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 16:56:49.846573    4006 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 16:56:49.846579    4006 sshutil.go:53] new ssh client: &{IP:localhost Port:50225 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/running-upgrade-853000/id_rsa Username:docker}
	I0815 16:56:49.850443    4006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:56:49.853387    4006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:56:49.857453    4006 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 16:56:49.857460    4006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 16:56:49.857466    4006 sshutil.go:53] new ssh client: &{IP:localhost Port:50225 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/running-upgrade-853000/id_rsa Username:docker}
	I0815 16:56:49.945537    4006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:56:49.951070    4006 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:56:49.951113    4006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:56:49.955067    4006 api_server.go:72] duration metric: took 117.082958ms to wait for apiserver process to appear ...
	I0815 16:56:49.955077    4006 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:56:49.955083    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:49.986617    4006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 16:56:50.009687    4006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 16:56:50.328715    4006 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 16:56:50.328726    4006 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 16:56:54.957341    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:54.957420    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:59.958074    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:59.958100    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:04.958561    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:04.958586    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:09.959147    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:09.959187    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:14.959914    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:14.959939    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:19.960631    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:19.960646    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0815 16:57:20.331497    4006 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0815 16:57:20.340653    4006 out.go:177] * Enabled addons: storage-provisioner
	I0815 16:57:20.347745    4006 addons.go:510] duration metric: took 30.50935275s for enable addons: enabled=[storage-provisioner]
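
Note the asymmetry here: storage-provisioner is reported as enabled because its manifest apply was issued via kubectl over SSH (the apply commands above), while default-storageclass fails because its callback lists StorageClasses through the Go client against https://10.0.2.15:8443, which never answers. A quick manual probe of the same endpoint from inside the guest would be (sketch; -k skips certificate verification, and 10.0.2.15 is QEMU's user-mode guest address):

	curl -k --max-time 5 https://10.0.2.15:8443/healthz
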
	I0815 16:57:24.961694    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:24.961721    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:29.963087    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:29.963128    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:34.964798    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:34.964840    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:39.965731    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:39.965780    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:44.968187    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:44.968236    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:49.970588    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:49.970789    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:57:50.009767    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:57:50.009846    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:57:50.024665    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:57:50.024738    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:57:50.035056    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:57:50.035125    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:57:50.045122    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:57:50.045198    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:57:50.055238    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:57:50.055317    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:57:50.071194    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:57:50.071263    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:57:50.081638    4006 logs.go:276] 0 containers: []
	W0815 16:57:50.081648    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:57:50.081710    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:57:50.092593    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:57:50.092608    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:57:50.092613    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:57:50.104044    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:57:50.104057    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:57:50.108509    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:57:50.108517    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:57:50.123033    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:57:50.123046    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:57:50.134907    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:57:50.134920    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:57:50.153055    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:57:50.153065    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:57:50.164750    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:57:50.164760    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:57:50.187645    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:57:50.187653    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:57:50.220465    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:57:50.220472    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:57:50.256722    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:57:50.256734    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:57:50.270758    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:57:50.270769    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:57:50.286277    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:57:50.286289    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:57:50.301678    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:57:50.301692    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
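
From this point the report repeats the same diagnostic cycle until the 6m0s node wait expires: a /healthz probe that times out after 5s, enumeration of the k8s_* containers via docker ps name filters, then a 400-line tail of each component's logs plus journalctl (kubelet, docker/cri-docker) and dmesg. Reproducing one iteration by hand is just the two commands already visible in the log, e.g. for the apiserver container:

	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	docker logs --tail 400 c4a80ba8e080
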
	I0815 16:57:52.816132    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:57.818786    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:57.818952    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:57:57.829803    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:57:57.829880    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:57:57.840282    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:57:57.840353    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:57:57.850766    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:57:57.850836    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:57:57.861292    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:57:57.861354    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:57:57.871776    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:57:57.871842    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:57:57.882701    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:57:57.882771    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:57:57.893666    4006 logs.go:276] 0 containers: []
	W0815 16:57:57.893675    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:57:57.893728    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:57:57.904068    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:57:57.904094    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:57:57.904099    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:57:57.937797    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:57:57.937806    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:57:57.974922    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:57:57.974933    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:57:57.989575    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:57:57.989586    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:57:58.006605    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:57:58.006616    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:57:58.018454    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:57:58.018471    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:57:58.030344    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:57:58.030357    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:57:58.041783    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:57:58.041794    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:57:58.064744    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:57:58.064755    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:57:58.068904    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:57:58.068912    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:57:58.083114    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:57:58.083128    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:57:58.094887    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:57:58.094901    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:57:58.106991    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:57:58.107005    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:00.624376    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:05.626826    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:05.626947    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:05.638939    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:58:05.639019    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:05.653826    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:58:05.653900    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:05.664536    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:58:05.664604    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:05.677209    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:58:05.677271    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:05.687606    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:58:05.687673    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:05.698052    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:58:05.698119    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:05.707785    4006 logs.go:276] 0 containers: []
	W0815 16:58:05.707797    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:05.707853    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:05.718724    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:58:05.718739    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:05.718744    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:05.754279    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:05.754288    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:05.759177    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:05.759186    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:05.794227    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:58:05.794241    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:05.809385    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:58:05.809396    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:58:05.829560    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:58:05.829575    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:58:05.841127    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:05.841141    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:05.863880    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:58:05.863888    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:58:05.878214    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:58:05.878231    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:58:05.892492    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:58:05.892507    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:58:05.904386    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:58:05.904396    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:58:05.916341    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:58:05.916351    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:58:05.928250    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:58:05.928261    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:08.441500    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:13.443818    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:13.444040    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:13.471997    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:58:13.472101    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:13.487872    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:58:13.487948    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:13.501443    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:58:13.501515    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:13.512949    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:58:13.513020    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:13.523100    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:58:13.523170    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:13.533720    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:58:13.533798    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:13.544106    4006 logs.go:276] 0 containers: []
	W0815 16:58:13.544117    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:13.544173    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:13.554129    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:58:13.554145    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:58:13.554149    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:58:13.565931    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:58:13.565942    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:13.580690    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:58:13.580705    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:58:13.598773    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:13.598782    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:13.603814    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:58:13.603824    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:58:13.616135    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:58:13.616145    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:58:13.631274    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:58:13.631285    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:58:13.646016    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:58:13.646027    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:58:13.657789    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:58:13.657800    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:58:13.670066    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:13.670077    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:13.695518    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:58:13.695527    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:13.707294    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:13.707306    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:13.742162    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:13.742175    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:16.279230    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:21.281711    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:21.281997    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:21.310675    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:58:21.310784    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:21.332488    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:58:21.332583    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:21.346116    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:58:21.346195    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:21.358109    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:58:21.358178    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:21.368291    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:58:21.368363    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:21.379093    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:58:21.379168    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:21.389610    4006 logs.go:276] 0 containers: []
	W0815 16:58:21.389620    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:21.389679    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:21.399756    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:58:21.399769    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:21.399774    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:21.434086    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:58:21.434096    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:58:21.448206    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:58:21.448220    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:58:21.460122    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:58:21.460136    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:21.474784    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:58:21.474798    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:58:21.486761    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:21.486775    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:21.512192    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:21.512203    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:21.516698    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:21.516705    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:21.554072    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:58:21.554082    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:58:21.568201    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:58:21.568214    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:58:21.580075    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:58:21.580084    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:58:21.597987    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:58:21.598000    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:58:21.609535    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:58:21.609546    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:24.125032    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:29.127467    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:29.127628    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:29.138891    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:58:29.138966    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:29.149416    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:58:29.149486    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:29.165353    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:58:29.165419    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:29.176491    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:58:29.176548    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:29.187040    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:58:29.187102    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:29.197945    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:58:29.198017    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:29.207619    4006 logs.go:276] 0 containers: []
	W0815 16:58:29.207630    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:29.207675    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:29.218004    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:58:29.218019    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:58:29.218025    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:58:29.232040    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:58:29.232053    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:58:29.243418    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:58:29.243431    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:58:29.260774    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:58:29.260788    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:58:29.277482    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:29.277493    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:29.302209    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:58:29.302224    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:29.314302    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:29.314316    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:29.349108    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:29.349116    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:29.382429    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:58:29.382443    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:58:29.394559    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:58:29.394571    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:29.410476    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:58:29.410487    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:58:29.423055    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:29.423067    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:29.428063    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:58:29.428073    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:58:31.943166    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:36.945556    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:36.945682    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:36.957069    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:58:36.957147    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:36.967703    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:58:36.967774    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:36.978088    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:58:36.978155    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:36.988368    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:58:36.988428    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:37.001886    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:58:37.001953    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:37.017253    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:58:37.017322    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:37.027335    4006 logs.go:276] 0 containers: []
	W0815 16:58:37.027347    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:37.027405    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:37.038133    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:58:37.038146    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:58:37.038151    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:58:37.049227    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:58:37.049240    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:58:37.065261    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:58:37.065274    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:37.082104    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:58:37.082117    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:58:37.093788    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:37.093798    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:37.130353    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:58:37.130364    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:58:37.145202    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:58:37.145215    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:58:37.165322    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:58:37.165334    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:58:37.177301    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:58:37.177312    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:58:37.194690    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:37.194700    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:37.218362    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:58:37.218373    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:37.229786    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:37.229796    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:37.264673    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:37.264684    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:39.771431    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:44.773870    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:44.774044    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:44.790263    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:58:44.790351    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:44.816043    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:58:44.816115    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:44.827674    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:58:44.827744    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:44.838378    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:58:44.838451    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:44.848741    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:58:44.848811    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:44.859518    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:58:44.859589    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:44.869609    4006 logs.go:276] 0 containers: []
	W0815 16:58:44.869620    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:44.869679    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:44.880229    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:58:44.880243    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:44.880247    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:44.949219    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:58:44.949233    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:58:44.964549    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:58:44.964564    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:58:44.976437    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:58:44.976448    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:58:44.987984    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:58:44.987998    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:58:45.002544    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:45.002555    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:45.027073    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:45.027082    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:45.062242    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:45.062255    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:45.069365    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:58:45.069374    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:58:45.085429    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:58:45.085439    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:45.100564    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:58:45.100576    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:58:45.112996    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:58:45.113007    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:58:45.133710    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:58:45.133720    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:47.647011    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:52.649310    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:52.649399    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:52.660116    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:58:52.660182    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:52.671664    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:58:52.671743    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:52.682229    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:58:52.682302    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:52.692325    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:58:52.692387    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:52.703585    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:58:52.703663    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:52.713861    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:58:52.713932    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:52.724479    4006 logs.go:276] 0 containers: []
	W0815 16:58:52.724496    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:52.724556    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:52.736264    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:58:52.736279    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:58:52.736285    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:58:52.747128    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:58:52.747140    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:52.761807    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:58:52.761818    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:58:52.773653    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:58:52.773668    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:58:52.791821    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:58:52.791831    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:52.803839    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:52.803850    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:52.839288    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:58:52.839302    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:58:52.853960    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:58:52.853973    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:58:52.865097    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:58:52.865108    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:58:52.876662    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:52.876675    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:52.901280    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:52.901291    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:52.934802    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:52.934810    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:52.939122    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:58:52.939129    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:58:52.953329    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:58:52.953344    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:58:52.964694    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:58:52.964707    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:58:55.476814    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:00.478475    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:00.478550    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:00.490387    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:00.490459    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:00.506067    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:00.506135    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:00.517969    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:00.518038    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:00.529953    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:00.530076    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:00.541977    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:00.542047    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:00.553461    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:00.553536    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:00.564517    4006 logs.go:276] 0 containers: []
	W0815 16:59:00.564532    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:00.564588    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:00.575178    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:00.575194    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:00.575201    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:00.588791    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:00.588801    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:00.606720    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:00.606729    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:00.621788    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:00.621800    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:00.633442    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:00.633454    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:00.645165    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:00.645175    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:00.656612    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:00.656623    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:00.691749    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:00.691763    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:00.729476    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:00.729491    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:00.740958    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:00.740968    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:00.764341    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:00.764349    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:00.768813    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:00.768822    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:00.780697    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:00.780708    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:00.792538    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:00.792550    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:00.807691    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:00.807709    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:03.332706    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:08.335127    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:08.335311    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:08.347333    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:08.347400    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:08.358843    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:08.358908    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:08.370506    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:08.370583    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:08.382312    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:08.382394    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:08.393474    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:08.393545    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:08.404910    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:08.404987    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:08.416308    4006 logs.go:276] 0 containers: []
	W0815 16:59:08.416319    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:08.416382    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:08.427820    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:08.427838    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:08.427843    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:08.465924    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:08.465938    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:08.471447    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:08.471459    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:08.511823    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:08.511835    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:08.526686    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:08.526696    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:08.544998    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:08.545008    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:08.556364    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:08.556376    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:08.580556    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:08.580564    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:08.592053    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:08.592064    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:08.603976    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:08.603986    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:08.621906    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:08.621917    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:08.638631    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:08.638641    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:08.653530    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:08.653540    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:08.664968    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:08.664978    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:08.676968    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:08.676979    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:11.190532    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:16.192825    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:16.192927    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:16.208566    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:16.208642    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:16.220724    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:16.220795    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:16.232579    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:16.232716    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:16.243913    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:16.243980    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:16.254735    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:16.254797    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:16.266202    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:16.266269    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:16.277131    4006 logs.go:276] 0 containers: []
	W0815 16:59:16.277142    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:16.277200    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:16.288641    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:16.288661    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:16.288666    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:16.301944    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:16.301956    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:16.314129    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:16.314141    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:16.318798    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:16.318808    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:16.333821    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:16.333838    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:16.347173    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:16.347185    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:16.365215    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:16.365228    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:16.404328    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:16.404338    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:16.419437    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:16.419448    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:16.432497    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:16.432508    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:16.466509    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:16.466523    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:16.489765    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:16.489774    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:16.501218    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:16.501231    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:16.513482    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:16.513493    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:16.528455    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:16.528467    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:19.042767    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:24.043915    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:24.044015    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:24.058726    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:24.058788    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:24.070192    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:24.070266    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:24.081911    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:24.081989    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:24.093265    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:24.093335    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:24.104449    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:24.104517    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:24.116853    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:24.116929    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:24.127674    4006 logs.go:276] 0 containers: []
	W0815 16:59:24.127686    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:24.127744    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:24.140063    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:24.140081    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:24.140086    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:24.155810    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:24.155826    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:24.169705    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:24.169717    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:24.185480    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:24.185494    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:24.190652    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:24.190663    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:24.203835    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:24.203846    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:24.216382    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:24.216394    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:24.229342    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:24.229357    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:24.248525    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:24.248538    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:24.284936    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:24.284952    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:24.325439    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:24.325450    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:24.337433    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:24.337444    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:24.349664    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:24.349676    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:24.365823    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:24.365835    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:24.379974    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:24.379984    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:26.906544    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:31.908802    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:31.909065    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:31.932852    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:31.932949    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:31.950930    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:31.951010    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:31.968481    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:31.968519    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:31.981143    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:31.981199    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:31.992975    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:31.993027    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:32.005543    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:32.005611    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:32.022565    4006 logs.go:276] 0 containers: []
	W0815 16:59:32.022577    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:32.022643    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:32.034452    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:32.034470    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:32.034478    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:32.047551    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:32.047562    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:32.064483    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:32.064496    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:32.081235    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:32.081247    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:32.121098    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:32.121110    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:32.136644    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:32.136655    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:32.150785    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:32.150795    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:32.176663    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:32.176679    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:32.198502    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:32.198517    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:32.213354    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:32.213368    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:32.225851    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:32.225863    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:32.238991    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:32.239003    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:32.253612    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:32.253624    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:32.290962    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:32.290977    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:32.296527    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:32.296535    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:34.818305    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:39.821219    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:39.821596    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:39.851529    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:39.851666    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:39.875572    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:39.875655    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:39.890503    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:39.890544    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:39.903215    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:39.903281    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:39.919702    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:39.919779    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:39.931976    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:39.932011    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:39.948372    4006 logs.go:276] 0 containers: []
	W0815 16:59:39.948385    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:39.948449    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:39.966818    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:39.966838    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:39.966844    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:39.984327    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:39.984338    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:39.997024    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:39.997038    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:40.014652    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:40.014668    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:40.027207    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:40.027219    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:40.063910    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:40.063922    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:40.105601    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:40.105610    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:40.120688    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:40.120703    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:40.134161    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:40.134173    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:40.150397    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:40.150409    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:40.155153    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:40.155161    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:40.167137    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:40.167148    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:40.180224    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:40.180236    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:40.193360    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:40.193373    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:40.212561    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:40.212578    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:42.741505    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:47.744116    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:47.744355    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:47.769842    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:47.769964    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:47.786872    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:47.786952    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:47.800964    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:47.801047    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:47.812963    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:47.813039    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:47.827356    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:47.827426    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:47.839167    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:47.839239    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:47.850497    4006 logs.go:276] 0 containers: []
	W0815 16:59:47.850509    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:47.850567    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:47.862330    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:47.862348    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:47.862353    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:47.877376    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:47.877389    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:47.893737    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:47.893748    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:47.907350    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:47.907362    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:47.926211    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:47.926223    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:47.938974    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:47.938988    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:47.973632    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:47.973641    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:48.011466    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:48.011474    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:48.023710    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:48.023721    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:48.038823    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:48.038836    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:48.063293    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:48.063306    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:48.075902    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:48.075915    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:48.080915    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:48.080926    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:48.093432    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:48.093443    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:48.106246    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:48.106258    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:50.622461    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:55.625114    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:55.625402    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:55.650445    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:55.650549    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:55.666720    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:55.666796    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:55.680745    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:55.680826    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:55.692189    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:55.692258    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:55.702919    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:55.702991    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:55.723565    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:55.723632    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:55.737673    4006 logs.go:276] 0 containers: []
	W0815 16:59:55.737687    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:55.737745    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:55.748495    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:55.748521    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:55.748540    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:55.767137    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:55.767149    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:55.780003    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:55.780015    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:55.792702    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:55.792712    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:55.805714    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:55.805727    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:55.824474    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:55.824487    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:55.840120    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:55.840132    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:55.852801    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:55.852814    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:55.858106    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:55.858128    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:55.872955    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:55.872967    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:55.889409    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:55.889421    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:55.905312    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:55.905320    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:55.930509    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:55.930522    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:55.965534    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:55.965544    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:56.002416    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:56.002427    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:58.519276    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:03.521613    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:03.521741    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:03.533041    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 17:00:03.533126    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:03.544063    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 17:00:03.544134    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:03.554504    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 17:00:03.554583    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:03.565067    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 17:00:03.565136    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:03.576365    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 17:00:03.576435    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:03.590463    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 17:00:03.590536    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:03.602404    4006 logs.go:276] 0 containers: []
	W0815 17:00:03.602415    4006 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:03.602473    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:03.615150    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 17:00:03.615168    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:03.615173    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:03.619656    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 17:00:03.619662    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 17:00:03.637845    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 17:00:03.637859    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 17:00:03.650207    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 17:00:03.650218    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 17:00:03.662513    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 17:00:03.662524    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 17:00:03.677488    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:03.677499    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:03.714704    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 17:00:03.714718    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 17:00:03.727989    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:03.727999    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:03.767022    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 17:00:03.767031    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 17:00:03.780100    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 17:00:03.780111    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 17:00:03.798990    4006 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:03.799003    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:03.824068    4006 logs.go:123] Gathering logs for container status ...
	I0815 17:00:03.824085    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:03.836942    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 17:00:03.836954    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 17:00:03.852064    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 17:00:03.852075    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 17:00:03.866549    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 17:00:03.866561    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 17:00:06.383791    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:11.385272    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:11.385376    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:11.396842    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 17:00:11.396908    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:11.408165    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 17:00:11.408241    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:11.423125    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 17:00:11.423188    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:11.433962    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 17:00:11.434031    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:11.444531    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 17:00:11.444603    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:11.455617    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 17:00:11.455686    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:11.465848    4006 logs.go:276] 0 containers: []
	W0815 17:00:11.465863    4006 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:11.465924    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:11.476962    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 17:00:11.476979    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 17:00:11.476985    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 17:00:11.504863    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 17:00:11.504875    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 17:00:11.519452    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 17:00:11.519465    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 17:00:11.531288    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:11.531299    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:11.535642    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 17:00:11.535648    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 17:00:11.547435    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 17:00:11.547446    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 17:00:11.559665    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 17:00:11.559675    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 17:00:11.571002    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 17:00:11.571012    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 17:00:11.585771    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:11.585784    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:11.627218    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 17:00:11.627232    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 17:00:11.640188    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 17:00:11.640205    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 17:00:11.653070    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 17:00:11.653081    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 17:00:11.675720    4006 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:11.675728    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:11.701816    4006 logs.go:123] Gathering logs for container status ...
	I0815 17:00:11.701833    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:11.714853    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:11.714864    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:14.253516    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:19.256371    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:19.256711    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:19.298279    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 17:00:19.298417    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:19.320278    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 17:00:19.320372    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:19.335830    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 17:00:19.335912    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:19.351440    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 17:00:19.351506    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:19.361701    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 17:00:19.361783    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:19.372296    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 17:00:19.372369    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:19.383372    4006 logs.go:276] 0 containers: []
	W0815 17:00:19.383385    4006 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:19.383444    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:19.398771    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 17:00:19.398788    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:19.398793    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:19.403293    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:19.403302    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:19.437928    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 17:00:19.437940    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 17:00:19.455903    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 17:00:19.455916    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 17:00:19.467666    4006 logs.go:123] Gathering logs for container status ...
	I0815 17:00:19.467677    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:19.480105    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:19.480117    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:19.515588    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 17:00:19.515600    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 17:00:19.531302    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 17:00:19.531316    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 17:00:19.546269    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 17:00:19.546283    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 17:00:19.559557    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 17:00:19.559572    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 17:00:19.575354    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 17:00:19.575369    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 17:00:19.588789    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 17:00:19.588801    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 17:00:19.601805    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 17:00:19.601817    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 17:00:19.617290    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 17:00:19.617388    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 17:00:19.637497    4006 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:19.637508    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:22.164204    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:27.167086    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:27.167616    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:27.211673    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 17:00:27.211809    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:27.231530    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 17:00:27.231624    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:27.248595    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 17:00:27.248671    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:27.260853    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 17:00:27.260925    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:27.271777    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 17:00:27.271845    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:27.282591    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 17:00:27.282655    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:27.296318    4006 logs.go:276] 0 containers: []
	W0815 17:00:27.296332    4006 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:27.296397    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:27.307491    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 17:00:27.307509    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 17:00:27.307515    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 17:00:27.322106    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 17:00:27.322117    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 17:00:27.334097    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 17:00:27.334109    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 17:00:27.346876    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 17:00:27.346887    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 17:00:27.371317    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:27.371331    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:27.407904    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:27.407922    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:27.413446    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:27.413458    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:27.454606    4006 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:27.454618    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:27.479316    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 17:00:27.479333    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 17:00:27.491766    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 17:00:27.491777    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 17:00:27.504715    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 17:00:27.504729    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 17:00:27.519503    4006 logs.go:123] Gathering logs for container status ...
	I0815 17:00:27.519515    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:27.533335    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 17:00:27.533347    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 17:00:27.548045    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 17:00:27.548058    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 17:00:27.566558    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 17:00:27.566575    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 17:00:30.081526    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:35.083797    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:35.083942    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:35.095425    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 17:00:35.095492    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:35.106262    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 17:00:35.106334    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:35.117155    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 17:00:35.117236    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:35.128025    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 17:00:35.128093    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:35.139060    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 17:00:35.139131    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:35.149480    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 17:00:35.149552    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:35.159984    4006 logs.go:276] 0 containers: []
	W0815 17:00:35.159996    4006 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:35.160051    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:35.171556    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 17:00:35.171572    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 17:00:35.171577    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 17:00:35.183501    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 17:00:35.183515    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 17:00:35.195196    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 17:00:35.195206    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 17:00:35.209761    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:35.209771    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:35.214881    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:35.214890    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:35.249318    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 17:00:35.249332    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 17:00:35.260852    4006 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:35.260865    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:35.286265    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 17:00:35.286280    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 17:00:35.301003    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 17:00:35.301014    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 17:00:35.315952    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 17:00:35.315963    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 17:00:35.334624    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 17:00:35.334634    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 17:00:35.349445    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:35.349454    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:35.386048    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 17:00:35.386065    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 17:00:35.407140    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 17:00:35.407153    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 17:00:35.421936    4006 logs.go:123] Gathering logs for container status ...
	I0815 17:00:35.421947    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:37.934260    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:42.936536    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:42.936704    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:42.947610    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 17:00:42.947681    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:42.958086    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 17:00:42.958152    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:42.969089    4006 logs.go:276] 4 containers: [d1bd85ce91e2 d5b496b8fd75 424cd520c960 9ce6c140fd49]
	I0815 17:00:42.969166    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:42.979925    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 17:00:42.979996    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:42.990380    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 17:00:42.990451    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:43.001321    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 17:00:43.001386    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:43.012706    4006 logs.go:276] 0 containers: []
	W0815 17:00:43.012718    4006 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:43.012781    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:43.023447    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 17:00:43.023464    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 17:00:43.023469    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 17:00:43.040304    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 17:00:43.040317    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 17:00:43.052747    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 17:00:43.052760    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 17:00:43.072832    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:43.072842    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:43.109654    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 17:00:43.109666    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 17:00:43.122951    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 17:00:43.122963    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 17:00:43.134648    4006 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:43.134658    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:43.158870    4006 logs.go:123] Gathering logs for container status ...
	I0815 17:00:43.158882    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:43.170949    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:43.170961    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:43.206031    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 17:00:43.206040    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 17:00:43.222987    4006 logs.go:123] Gathering logs for coredns [d1bd85ce91e2] ...
	I0815 17:00:43.222998    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1bd85ce91e2"
	I0815 17:00:43.237058    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 17:00:43.237069    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 17:00:43.248460    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 17:00:43.248470    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 17:00:43.263035    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:43.263045    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:43.267597    4006 logs.go:123] Gathering logs for coredns [d5b496b8fd75] ...
	I0815 17:00:43.267604    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5b496b8fd75"
	I0815 17:00:45.781444    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:50.782578    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:50.786752    4006 out.go:201] 
	W0815 17:00:50.790740    4006 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0815 17:00:50.790746    4006 out.go:270] * 
	W0815 17:00:50.791231    4006 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:00:50.806736    4006 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-853000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-15 17:00:50.890012 -0700 PDT m=+3334.814395001
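The repeated api_server.go:253/269 pairs above are minikube's node-start wait: it polls the guest apiserver's /healthz until it answers or the 6m0s budget in the GUEST_START message lapses, and here every probe dies roughly 5s in with Client.Timeout. A minimal Go sketch of that kind of poll loop, with the URL, per-probe timeout, and retry interval read off this log as assumptions (not minikube's actual api_server.go implementation):

	// healthz_wait.go: minimal sketch of an apiserver health wait like the one
	// producing the api_server.go:253/269 pairs above; URL, per-probe timeout,
	// and retry interval are assumptions read off this log, not minikube's code.
	package main

	import (
		"crypto/tls"
		"errors"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthy(url string, budget time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // each probe above times out ~5s after it starts
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // guest apiserver cert is self-signed
			},
		}
		deadline := time.Now().Add(budget)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported healthy
				}
			}
			time.Sleep(2500 * time.Millisecond) // pause before the next probe
		}
		return errors.New("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		// 6m0s matches the node-start budget in the GUEST_START message above.
		if err := waitForHealthy("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}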
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-853000 -n running-upgrade-853000
E0815 17:00:53.649905    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-853000 -n running-upgrade-853000: exit status 2 (15.581497125s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
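The status probe exits non-zero yet still prints a host state ("Running"), which is why helpers_test.go records it as "may be ok" rather than a hard failure. A hedged Go sketch of that pattern, using the same binary and flags as the run above (checkHostStatus is a hypothetical helper, not part of minikube's test suite):

	// status_check.go: sketch of running `minikube status --format={{.Host}}`
	// and separating "state encoded in a non-zero exit code" from a hard failure.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func checkHostStatus(profile string) (state string, exitCode int, err error) {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile)
		out, runErr := cmd.Output()
		state = strings.TrimSpace(string(out))
		if exitErr, ok := runErr.(*exec.ExitError); ok {
			// Non-zero exit (status 2 above) still carries a usable state string.
			return state, exitErr.ExitCode(), nil
		}
		return state, 0, runErr // non-nil only for hard failures (e.g. missing binary)
	}

	func main() {
		state, code, err := checkHostStatus("running-upgrade-853000")
		fmt.Println(state, code, err) // e.g. "Running 2 <nil>" for the run above
	}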
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-853000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-246000          | force-systemd-flag-246000 | jenkins | v1.33.1 | 15 Aug 24 16:51 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-777000              | force-systemd-env-777000  | jenkins | v1.33.1 | 15 Aug 24 16:51 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-777000           | force-systemd-env-777000  | jenkins | v1.33.1 | 15 Aug 24 16:51 PDT | 15 Aug 24 16:51 PDT |
	| start   | -p docker-flags-672000                | docker-flags-672000       | jenkins | v1.33.1 | 15 Aug 24 16:51 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-246000             | force-systemd-flag-246000 | jenkins | v1.33.1 | 15 Aug 24 16:51 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-246000          | force-systemd-flag-246000 | jenkins | v1.33.1 | 15 Aug 24 16:51 PDT | 15 Aug 24 16:51 PDT |
	| start   | -p cert-expiration-703000             | cert-expiration-703000    | jenkins | v1.33.1 | 15 Aug 24 16:51 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-672000 ssh               | docker-flags-672000       | jenkins | v1.33.1 | 15 Aug 24 16:51 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-672000 ssh               | docker-flags-672000       | jenkins | v1.33.1 | 15 Aug 24 16:51 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-672000                | docker-flags-672000       | jenkins | v1.33.1 | 15 Aug 24 16:51 PDT | 15 Aug 24 16:51 PDT |
	| start   | -p cert-options-617000                | cert-options-617000       | jenkins | v1.33.1 | 15 Aug 24 16:51 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-617000 ssh               | cert-options-617000       | jenkins | v1.33.1 | 15 Aug 24 16:51 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-617000 -- sudo        | cert-options-617000       | jenkins | v1.33.1 | 15 Aug 24 16:51 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-617000                | cert-options-617000       | jenkins | v1.33.1 | 15 Aug 24 16:51 PDT | 15 Aug 24 16:51 PDT |
	| start   | -p running-upgrade-853000             | minikube                  | jenkins | v1.26.0 | 15 Aug 24 16:51 PDT | 15 Aug 24 16:52 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-853000             | running-upgrade-853000    | jenkins | v1.33.1 | 15 Aug 24 16:52 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-703000             | cert-expiration-703000    | jenkins | v1.33.1 | 15 Aug 24 16:54 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-703000             | cert-expiration-703000    | jenkins | v1.33.1 | 15 Aug 24 16:54 PDT | 15 Aug 24 16:54 PDT |
	| start   | -p kubernetes-upgrade-559000          | kubernetes-upgrade-559000 | jenkins | v1.33.1 | 15 Aug 24 16:54 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-559000          | kubernetes-upgrade-559000 | jenkins | v1.33.1 | 15 Aug 24 16:54 PDT | 15 Aug 24 16:54 PDT |
	| start   | -p kubernetes-upgrade-559000          | kubernetes-upgrade-559000 | jenkins | v1.33.1 | 15 Aug 24 16:54 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-559000          | kubernetes-upgrade-559000 | jenkins | v1.33.1 | 15 Aug 24 16:54 PDT | 15 Aug 24 16:54 PDT |
	| start   | -p stopped-upgrade-889000             | minikube                  | jenkins | v1.26.0 | 15 Aug 24 16:54 PDT | 15 Aug 24 16:55 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-889000 stop           | minikube                  | jenkins | v1.26.0 | 15 Aug 24 16:55 PDT | 15 Aug 24 16:55 PDT |
	| start   | -p stopped-upgrade-889000             | stopped-upgrade-889000    | jenkins | v1.33.1 | 15 Aug 24 16:55 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 16:55:53
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 16:55:53.335899    4145 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:55:53.336057    4145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:55:53.336061    4145 out.go:358] Setting ErrFile to fd 2...
	I0815 16:55:53.336065    4145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:55:53.336236    4145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:55:53.337583    4145 out.go:352] Setting JSON to false
	I0815 16:55:53.357447    4145 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3321,"bootTime":1723762832,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:55:53.357527    4145 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:55:53.362571    4145 out.go:177] * [stopped-upgrade-889000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:55:53.369572    4145 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:55:53.369719    4145 notify.go:220] Checking for updates...
	I0815 16:55:53.376484    4145 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:55:53.379573    4145 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:55:53.382545    4145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:55:53.385509    4145 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:55:53.388531    4145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:55:53.390141    4145 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 16:55:53.393518    4145 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 16:55:53.396523    4145 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:55:53.400345    4145 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 16:55:53.407543    4145 start.go:297] selected driver: qemu2
	I0815 16:55:53.407548    4145 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50482 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 16:55:53.407592    4145 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:55:53.410270    4145 cni.go:84] Creating CNI manager for ""
	I0815 16:55:53.410288    4145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:55:53.410311    4145 start.go:340] cluster config:
	{Name:stopped-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50482 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 16:55:53.410361    4145 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:55:53.416474    4145 out.go:177] * Starting "stopped-upgrade-889000" primary control-plane node in "stopped-upgrade-889000" cluster
	I0815 16:55:53.420552    4145 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0815 16:55:53.420568    4145 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0815 16:55:53.420577    4145 cache.go:56] Caching tarball of preloaded images
	I0815 16:55:53.420634    4145 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:55:53.420640    4145 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0815 16:55:53.420691    4145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/config.json ...
	I0815 16:55:53.421031    4145 start.go:360] acquireMachinesLock for stopped-upgrade-889000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:55:53.421064    4145 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "stopped-upgrade-889000"
	I0815 16:55:53.421073    4145 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:55:53.421078    4145 fix.go:54] fixHost starting: 
	I0815 16:55:53.421187    4145 fix.go:112] recreateIfNeeded on stopped-upgrade-889000: state=Stopped err=<nil>
	W0815 16:55:53.421196    4145 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:55:53.429436    4145 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-889000" ...
	I0815 16:55:50.094932    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:55:53.435482    4145 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:55:53.435554    4145 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50447-:22,hostfwd=tcp::50448-:2376,hostname=stopped-upgrade-889000 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/disk.qcow2
	I0815 16:55:53.480560    4145 main.go:141] libmachine: STDOUT: 
	I0815 16:55:53.480593    4145 main.go:141] libmachine: STDERR: 
	I0815 16:55:53.480599    4145 main.go:141] libmachine: Waiting for VM to start (ssh -p 50447 docker@127.0.0.1)...
	I0815 16:55:55.097878    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:55:55.098278    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:55:55.141099    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:55:55.141231    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:55:55.160811    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:55:55.160903    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:55:55.175249    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:55:55.175329    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:55:55.187292    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:55:55.187360    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:55:55.197958    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:55:55.198026    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:55:55.209001    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:55:55.209069    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:55:55.219717    4006 logs.go:276] 0 containers: []
	W0815 16:55:55.219731    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:55:55.219787    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:55:55.230309    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:55:55.230329    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:55:55.230334    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:55:55.244380    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:55:55.244393    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:55:55.260289    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:55:55.260304    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:55:55.275552    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:55:55.275564    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:55:55.299392    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:55:55.299406    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:55:55.304051    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:55:55.304060    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:55:55.318087    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:55:55.318100    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:55:55.336383    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:55:55.336395    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:55:55.347870    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:55:55.347885    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:55:55.366624    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:55:55.366636    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:55:55.379834    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:55:55.379848    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:55:55.391217    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:55:55.391230    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:55:55.434122    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:55:55.434136    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:55:55.447960    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:55:55.447972    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:55:55.487536    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:55:55.487546    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:55:55.504513    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:55:55.504524    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:55:55.516570    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:55:55.516579    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:55:58.036304    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:03.039024    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:03.039258    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:56:03.051296    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:56:03.051374    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:56:03.066962    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:56:03.067038    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:56:03.077174    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:56:03.077241    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:56:03.087822    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:56:03.087898    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:56:03.098878    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:56:03.098943    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:56:03.109622    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:56:03.109685    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:56:03.120579    4006 logs.go:276] 0 containers: []
	W0815 16:56:03.120591    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:56:03.120649    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:56:03.131782    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:56:03.131800    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:56:03.131805    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:56:03.136656    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:56:03.136662    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:56:03.158858    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:56:03.158872    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:56:03.176281    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:56:03.176291    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:56:03.189580    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:56:03.189590    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:56:03.201619    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:56:03.201632    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:56:03.243650    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:56:03.243657    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:56:03.257497    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:56:03.257506    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:56:03.276869    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:56:03.276879    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:56:03.291222    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:56:03.291235    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:56:03.303001    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:56:03.303013    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:56:03.327244    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:56:03.327255    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:56:03.364536    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:56:03.364556    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:56:03.382859    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:56:03.382872    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:56:03.396449    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:56:03.396460    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:56:03.412586    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:56:03.412599    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:56:03.450118    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:56:03.450129    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
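Each gathering round above follows one pattern: resolve container IDs with a k8s_ name filter, then tail each container's logs. A generic sketch of the same loop (hypothetical, not minikube's own code):

    # collect the last 400 log lines from every kube-system container
    for id in $(docker ps -a --filter name=k8s_ --format '{{.ID}}'); do
      echo "=== $id ==="
      docker logs --tail 400 "$id"
    done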
	I0815 16:56:05.966208    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:10.968757    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:10.969188    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:56:11.011212    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:56:11.011361    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:56:11.031754    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:56:11.031859    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:56:11.054898    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:56:11.054970    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:56:11.067231    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:56:11.067308    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:56:11.077552    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:56:11.077621    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:56:11.088222    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:56:11.088303    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:56:11.098699    4006 logs.go:276] 0 containers: []
	W0815 16:56:11.098712    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:56:11.098770    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:56:11.109597    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:56:11.109615    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:56:11.109621    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:56:11.148954    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:56:11.148962    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:56:11.166523    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:56:11.166536    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:56:11.177627    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:56:11.177640    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:56:11.189277    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:56:11.189287    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:56:11.203948    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:56:11.203958    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:56:11.215551    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:56:11.215562    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:56:11.230727    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:56:11.230737    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:56:11.242095    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:56:11.242107    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:56:11.265165    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:56:11.265171    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:56:11.277003    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:56:11.277014    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:56:11.281766    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:56:11.281771    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:56:11.296220    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:56:11.296233    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:56:11.313790    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:56:11.313803    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:56:11.333337    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:56:11.333346    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:56:11.368227    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:56:11.368236    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:56:11.388698    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:56:11.388712    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:56:13.861566    4145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/config.json ...
	I0815 16:56:13.862252    4145 machine.go:93] provisionDockerMachine start ...
	I0815 16:56:13.862444    4145 main.go:141] libmachine: Using SSH client type: native
	I0815 16:56:13.862928    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051305a0] 0x105132e00 <nil>  [] 0s} localhost 50447 <nil> <nil>}
	I0815 16:56:13.862942    4145 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:56:13.954681    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:56:13.954720    4145 buildroot.go:166] provisioning hostname "stopped-upgrade-889000"
	I0815 16:56:13.954819    4145 main.go:141] libmachine: Using SSH client type: native
	I0815 16:56:13.955078    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051305a0] 0x105132e00 <nil>  [] 0s} localhost 50447 <nil> <nil>}
	I0815 16:56:13.955094    4145 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-889000 && echo "stopped-upgrade-889000" | sudo tee /etc/hostname
	I0815 16:56:14.039805    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-889000
	
	I0815 16:56:14.039874    4145 main.go:141] libmachine: Using SSH client type: native
	I0815 16:56:14.040051    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051305a0] 0x105132e00 <nil>  [] 0s} localhost 50447 <nil> <nil>}
	I0815 16:56:14.040064    4145 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-889000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-889000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-889000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:56:14.116883    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:56:14.116897    4145 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-964/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-964/.minikube}
	I0815 16:56:14.116913    4145 buildroot.go:174] setting up certificates
	I0815 16:56:14.116921    4145 provision.go:84] configureAuth start
	I0815 16:56:14.116926    4145 provision.go:143] copyHostCerts
	I0815 16:56:14.117009    4145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-964/.minikube/ca.pem, removing ...
	I0815 16:56:14.117016    4145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-964/.minikube/ca.pem
	I0815 16:56:14.117132    4145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-964/.minikube/ca.pem (1082 bytes)
	I0815 16:56:14.117341    4145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-964/.minikube/cert.pem, removing ...
	I0815 16:56:14.117346    4145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-964/.minikube/cert.pem
	I0815 16:56:14.117410    4145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-964/.minikube/cert.pem (1123 bytes)
	I0815 16:56:14.117529    4145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-964/.minikube/key.pem, removing ...
	I0815 16:56:14.117534    4145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-964/.minikube/key.pem
	I0815 16:56:14.117583    4145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-964/.minikube/key.pem (1679 bytes)
	I0815 16:56:14.117695    4145 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-964/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-889000 san=[127.0.0.1 localhost minikube stopped-upgrade-889000]
	I0815 16:56:14.330948    4145 provision.go:177] copyRemoteCerts
	I0815 16:56:14.331001    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:56:14.331013    4145 sshutil.go:53] new ssh client: &{IP:localhost Port:50447 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/id_rsa Username:docker}
	I0815 16:56:14.368303    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:56:14.375569    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0815 16:56:14.382766    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 16:56:14.389518    4145 provision.go:87] duration metric: took 272.585166ms to configureAuth
	I0815 16:56:14.389529    4145 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:56:14.389646    4145 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 16:56:14.389697    4145 main.go:141] libmachine: Using SSH client type: native
	I0815 16:56:14.389798    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051305a0] 0x105132e00 <nil>  [] 0s} localhost 50447 <nil> <nil>}
	I0815 16:56:14.389802    4145 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:56:14.462307    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:56:14.462317    4145 buildroot.go:70] root file system type: tmpfs
	I0815 16:56:14.462373    4145 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:56:14.462424    4145 main.go:141] libmachine: Using SSH client type: native
	I0815 16:56:14.462555    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051305a0] 0x105132e00 <nil>  [] 0s} localhost 50447 <nil> <nil>}
	I0815 16:56:14.462591    4145 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:56:14.534503    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:56:14.534569    4145 main.go:141] libmachine: Using SSH client type: native
	I0815 16:56:14.534701    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051305a0] 0x105132e00 <nil>  [] 0s} localhost 50447 <nil> <nil>}
	I0815 16:56:14.534709    4145 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:56:14.926222    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:56:14.926234    4145 machine.go:96] duration metric: took 1.063960458s to provisionDockerMachine
	I0815 16:56:14.926241    4145 start.go:293] postStartSetup for "stopped-upgrade-889000" (driver="qemu2")
	I0815 16:56:14.926248    4145 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:56:14.926315    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:56:14.926325    4145 sshutil.go:53] new ssh client: &{IP:localhost Port:50447 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/id_rsa Username:docker}
	I0815 16:56:14.963662    4145 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:56:14.965010    4145 info.go:137] Remote host: Buildroot 2021.02.12
	I0815 16:56:14.965018    4145 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-964/.minikube/addons for local assets ...
	I0815 16:56:14.965095    4145 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-964/.minikube/files for local assets ...
	I0815 16:56:14.965189    4145 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem -> 14462.pem in /etc/ssl/certs
	I0815 16:56:14.965290    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:56:14.968020    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem --> /etc/ssl/certs/14462.pem (1708 bytes)
	I0815 16:56:14.975193    4145 start.go:296] duration metric: took 48.946125ms for postStartSetup
	I0815 16:56:14.975208    4145 fix.go:56] duration metric: took 21.553893708s for fixHost
	I0815 16:56:14.975253    4145 main.go:141] libmachine: Using SSH client type: native
	I0815 16:56:14.975362    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051305a0] 0x105132e00 <nil>  [] 0s} localhost 50447 <nil> <nil>}
	I0815 16:56:14.975367    4145 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:56:15.045079    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723766174.752727629
	
	I0815 16:56:15.045087    4145 fix.go:216] guest clock: 1723766174.752727629
	I0815 16:56:15.045091    4145 fix.go:229] Guest: 2024-08-15 16:56:14.752727629 -0700 PDT Remote: 2024-08-15 16:56:14.975209 -0700 PDT m=+21.671272293 (delta=-222.481371ms)
	I0815 16:56:15.045103    4145 fix.go:200] guest clock delta is within tolerance: -222.481371ms
	I0815 16:56:15.045106    4145 start.go:83] releasing machines lock for "stopped-upgrade-889000", held for 21.623799583s
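The guest-clock check above boils down to subtracting the guest's epoch time from the host's. A coarse, second-resolution sketch from the host side, assuming the forwarded SSH port and key shown earlier in this log (minikube's actual sub-second tolerance logic lives in fix.go):

    guest=$(ssh -p 50447 \
      -i /Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/id_rsa \
      docker@localhost 'date +%s')
    host=$(date +%s)
    echo "guest-host delta: $((guest - host))s"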
	I0815 16:56:15.045165    4145 ssh_runner.go:195] Run: cat /version.json
	I0815 16:56:15.045180    4145 sshutil.go:53] new ssh client: &{IP:localhost Port:50447 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/id_rsa Username:docker}
	I0815 16:56:15.045165    4145 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:56:15.045206    4145 sshutil.go:53] new ssh client: &{IP:localhost Port:50447 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/id_rsa Username:docker}
	W0815 16:56:15.045840    4145 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50447: connect: connection refused
	I0815 16:56:15.045865    4145 retry.go:31] will retry after 334.022404ms: dial tcp [::1]:50447: connect: connection refused
	W0815 16:56:15.422762    4145 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0815 16:56:15.422835    4145 ssh_runner.go:195] Run: systemctl --version
	I0815 16:56:15.425508    4145 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 16:56:15.427631    4145 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:56:15.427690    4145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0815 16:56:15.431483    4145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0815 16:56:15.438057    4145 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:56:15.438081    4145 start.go:495] detecting cgroup driver to use...
	I0815 16:56:15.438169    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:56:15.445902    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0815 16:56:15.449409    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:56:15.452467    4145 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:56:15.452522    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:56:15.455940    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:56:15.458906    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:56:15.462125    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:56:15.464929    4145 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:56:15.467851    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:56:15.470901    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:56:15.473971    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:56:15.476804    4145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:56:15.479998    4145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:56:15.483159    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:56:15.540876    4145 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 16:56:15.549735    4145 start.go:495] detecting cgroup driver to use...
	I0815 16:56:15.549804    4145 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:56:15.556936    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:56:15.562070    4145 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:56:15.571080    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:56:15.575223    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:56:15.579608    4145 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:56:15.619353    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:56:15.624369    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:56:15.629595    4145 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:56:15.630847    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:56:15.633473    4145 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0815 16:56:15.638157    4145 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:56:15.696517    4145 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:56:15.757093    4145 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:56:15.757168    4145 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
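The 130-byte daemon.json pushed here is not echoed in the log. A plausible minimal equivalent that selects the cgroupfs driver (exec-opts is Docker's documented mechanism for this; the file minikube actually writes may carry additional settings):

    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker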
	I0815 16:56:15.762289    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:56:15.824648    4145 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:56:16.970888    4145 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.146205875s)
	I0815 16:56:16.970946    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:56:16.976029    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:56:16.980956    4145 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:56:17.040810    4145 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:56:17.118584    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:56:17.176289    4145 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:56:17.182898    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:56:17.187462    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:56:17.251966    4145 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:56:17.289249    4145 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:56:17.289341    4145 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:56:17.292860    4145 start.go:563] Will wait 60s for crictl version
	I0815 16:56:17.292923    4145 ssh_runner.go:195] Run: which crictl
	I0815 16:56:17.294414    4145 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:56:17.308766    4145 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0815 16:56:17.308846    4145 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:56:17.325352    4145 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:56:17.346532    4145 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0815 16:56:17.346597    4145 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0815 16:56:17.347896    4145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:56:17.351677    4145 kubeadm.go:883] updating cluster {Name:stopped-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50482 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0815 16:56:17.351720    4145 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0815 16:56:17.351764    4145 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:56:17.364300    4145 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0815 16:56:17.364311    4145 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0815 16:56:17.364360    4145 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0815 16:56:17.367991    4145 ssh_runner.go:195] Run: which lz4
	I0815 16:56:17.369423    4145 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 16:56:17.370650    4145 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 16:56:17.370659    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0815 16:56:18.304203    4145 docker.go:649] duration metric: took 934.799333ms to copy over tarball
	I0815 16:56:18.304258    4145 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 16:56:13.902232    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:19.482834    4145 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.178548166s)
	I0815 16:56:19.482848    4145 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 16:56:19.498048    4145 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0815 16:56:19.500856    4145 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0815 16:56:19.505627    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:56:19.582113    4145 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:56:21.238752    4145 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.65659925s)
	I0815 16:56:21.238847    4145 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:56:21.249551    4145 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0815 16:56:21.249560    4145 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0815 16:56:21.249565    4145 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 16:56:21.254771    4145 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:56:21.257185    4145 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:56:21.259200    4145 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:56:21.259797    4145 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0815 16:56:21.261856    4145 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:56:21.261943    4145 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0815 16:56:21.263140    4145 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0815 16:56:21.263277    4145 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:56:21.264351    4145 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:56:21.264476    4145 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0815 16:56:21.265131    4145 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:56:21.265420    4145 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:56:21.266579    4145 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 16:56:21.267025    4145 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:56:21.267828    4145 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:56:21.268506    4145 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 16:56:21.615274    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:56:21.627299    4145 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0815 16:56:21.627335    4145 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:56:21.627384    4145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:56:21.630802    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0815 16:56:21.634588    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0815 16:56:21.645047    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0815 16:56:21.645078    4145 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0815 16:56:21.645095    4145 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0815 16:56:21.645138    4145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0815 16:56:21.654063    4145 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0815 16:56:21.654083    4145 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0815 16:56:21.654134    4145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0815 16:56:21.665240    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0815 16:56:21.665358    4145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	W0815 16:56:21.668962    4145 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0815 16:56:21.669076    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:56:21.670904    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0815 16:56:21.670926    4145 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0815 16:56:21.670938    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0815 16:56:21.677862    4145 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0815 16:56:21.677877    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
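Every cached image is streamed into the daemon the same way: docker load reads the image tarball from stdin. A generic loop over the staged image directory (a sketch, not minikube's code):

    for img in /var/lib/minikube/images/*; do
      echo "loading $img"
      sudo cat "$img" | docker load
    done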
	I0815 16:56:21.688702    4145 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0815 16:56:21.688731    4145 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:56:21.688786    4145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:56:21.703719    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:56:21.716625    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0815 16:56:21.716747    4145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0815 16:56:21.716790    4145 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0815 16:56:21.718247    4145 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0815 16:56:21.718263    4145 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:56:21.718302    4145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:56:21.719210    4145 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0815 16:56:21.719228    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0815 16:56:21.722169    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:56:21.740466    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0815 16:56:21.760487    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0815 16:56:21.763717    4145 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0815 16:56:21.763737    4145 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:56:21.763785    4145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:56:21.780050    4145 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0815 16:56:21.780066    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0815 16:56:21.789469    4145 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0815 16:56:21.789491    4145 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 16:56:21.789542    4145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0815 16:56:21.804303    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0815 16:56:21.830427    4145 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0815 16:56:21.830456    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0815 16:56:22.175121    4145 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0815 16:56:22.175629    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:56:22.213238    4145 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0815 16:56:22.213282    4145 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:56:22.213380    4145 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:56:22.239280    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 16:56:22.239451    4145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0815 16:56:22.241758    4145 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0815 16:56:22.241778    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0815 16:56:22.275466    4145 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 16:56:22.275480    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0815 16:56:22.515040    4145 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 16:56:22.515081    4145 cache_images.go:92] duration metric: took 1.265494667s to LoadCachedImages
	W0815 16:56:22.515116    4145 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0815 16:56:22.515122    4145 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0815 16:56:22.515170    4145 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-889000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:56:22.515238    4145 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0815 16:56:22.529309    4145 cni.go:84] Creating CNI manager for ""
	I0815 16:56:22.529321    4145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:56:22.529326    4145 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 16:56:22.529335    4145 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-889000 NodeName:stopped-upgrade-889000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 16:56:22.529410    4145 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-889000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 16:56:22.529466    4145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0815 16:56:22.532401    4145 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:56:22.532430    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 16:56:22.535469    4145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0815 16:56:22.540483    4145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:56:22.545527    4145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
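The kubeadm config staged to /var/tmp/minikube/kubeadm.yaml.new above can be exercised without mutating the node; kubeadm's --dry-run flag prints what init would do. A sketch, assuming the v1.24.1 binaries minikube keeps under /var/lib/minikube/binaries:

    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run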
	I0815 16:56:22.550561    4145 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0815 16:56:22.551849    4145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:56:22.555767    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:56:22.621917    4145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:56:22.628315    4145 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000 for IP: 10.0.2.15
	I0815 16:56:22.628323    4145 certs.go:194] generating shared ca certs ...
	I0815 16:56:22.628335    4145 certs.go:226] acquiring lock for ca certs: {Name:mk1fa67494d9857cf8e0d98ec65576a15d2cd3c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:56:22.628487    4145 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-964/.minikube/ca.key
	I0815 16:56:22.628524    4145 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-964/.minikube/proxy-client-ca.key
	I0815 16:56:22.628529    4145 certs.go:256] generating profile certs ...
	I0815 16:56:22.628593    4145 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/client.key
	I0815 16:56:22.628614    4145 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.key.73227b1b
	I0815 16:56:22.628625    4145 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.crt.73227b1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0815 16:56:22.867768    4145 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.crt.73227b1b ...
	I0815 16:56:22.867786    4145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.crt.73227b1b: {Name:mk67aa5da0e72bcf848236e37ade401b9d14c0f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:56:22.868404    4145 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.key.73227b1b ...
	I0815 16:56:22.868412    4145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.key.73227b1b: {Name:mk546f651669edc022ebf3798e841d2a806750d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:56:22.868545    4145 certs.go:381] copying /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.crt.73227b1b -> /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.crt
	I0815 16:56:22.868709    4145 certs.go:385] copying /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.key.73227b1b -> /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.key
	I0815 16:56:22.868864    4145 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/proxy-client.key
	I0815 16:56:22.869010    4145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/1446.pem (1338 bytes)
	W0815 16:56:22.869033    4145 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-964/.minikube/certs/1446_empty.pem, impossibly tiny 0 bytes
	I0815 16:56:22.869039    4145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 16:56:22.869082    4145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:56:22.869109    4145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:56:22.869134    4145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/key.pem (1679 bytes)
	I0815 16:56:22.869187    4145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem (1708 bytes)
	I0815 16:56:22.869547    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:56:22.876954    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 16:56:22.884136    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:56:22.891461    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 16:56:22.898475    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 16:56:22.905274    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:56:22.911953    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:56:22.919198    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:56:22.926443    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:56:22.932951    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/certs/1446.pem --> /usr/share/ca-certificates/1446.pem (1338 bytes)
	I0815 16:56:22.939791    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem --> /usr/share/ca-certificates/14462.pem (1708 bytes)
	I0815 16:56:22.946961    4145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
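	[Note: "scp memory --> <path>" in the lines above means the file content is generated in-process and streamed over SSH rather than copied from a local file. A rough shell equivalent of the same effect (illustrative only; kubeconfig_content is a stand-in for the generated bytes, not a variable from this run):
	  printf '%s' "$kubeconfig_content" | ssh docker@vm 'sudo tee /var/lib/minikube/kubeconfig >/dev/null'
	]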
	I0815 16:56:22.951992    4145 ssh_runner.go:195] Run: openssl version
	I0815 16:56:22.953944    4145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:56:22.956707    4145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:56:22.958155    4145 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:56:22.958178    4145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:56:22.959871    4145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:56:22.963256    4145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1446.pem && ln -fs /usr/share/ca-certificates/1446.pem /etc/ssl/certs/1446.pem"
	I0815 16:56:22.966023    4145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1446.pem
	I0815 16:56:22.967404    4145 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:13 /usr/share/ca-certificates/1446.pem
	I0815 16:56:22.967422    4145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1446.pem
	I0815 16:56:22.969365    4145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1446.pem /etc/ssl/certs/51391683.0"
	I0815 16:56:22.972536    4145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14462.pem && ln -fs /usr/share/ca-certificates/14462.pem /etc/ssl/certs/14462.pem"
	I0815 16:56:22.975946    4145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14462.pem
	I0815 16:56:22.977464    4145 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:13 /usr/share/ca-certificates/14462.pem
	I0815 16:56:22.977482    4145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14462.pem
	I0815 16:56:22.979200    4145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14462.pem /etc/ssl/certs/3ec20f2e.0"
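	[Note: the hash/symlink pairs above follow OpenSSL's c_rehash convention: "openssl x509 -hash" prints the subject-name hash, and /etc/ssl/certs/<hash>.0 must link to the PEM for the system trust store to find it (b5213941 is minikubeCA's hash in this run). The same step for one certificate, as a sketch:
	  pem=/usr/share/ca-certificates/minikubeCA.pem
	  hash=$(openssl x509 -hash -noout -in "$pem")     # e.g. b5213941
	  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
	]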
	I0815 16:56:22.982194    4145 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:56:22.983619    4145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:56:22.985567    4145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:56:22.987307    4145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:56:22.989557    4145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:56:22.991312    4145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:56:22.993202    4145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
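	[Note: -checkend 86400 is an expiry probe: openssl exits non-zero when the certificate will expire within the next 86400 seconds (24 hours), which is how minikube decides whether a cert must be regenerated. For example:
	  if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	    echo "apiserver cert good for at least another 24h"
	  else
	    echo "apiserver cert expires within 24h; regenerate"
	  fi
	]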
	I0815 16:56:22.995051    4145 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50482 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 16:56:22.995118    4145 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:56:23.005701    4145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 16:56:23.008664    4145 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 16:56:23.008670    4145 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 16:56:23.008694    4145 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 16:56:23.012598    4145 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:56:23.012883    4145 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-889000" does not appear in /Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:56:23.012978    4145 kubeconfig.go:62] /Users/jenkins/minikube-integration/19452-964/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-889000" cluster setting kubeconfig missing "stopped-upgrade-889000" context setting]
	I0815 16:56:23.013154    4145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/kubeconfig: {Name:mk7594709ce290a3e032dc58c8ec366ac5a2a141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:56:23.013622    4145 kapi.go:59] client config for stopped-upgrade-889000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066e9610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 16:56:23.013947    4145 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 16:56:23.016633    4145 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-889000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0815 16:56:23.016638    4145 kubeadm.go:1160] stopping kube-system containers ...
	I0815 16:56:23.016680    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:56:23.027332    4145 docker.go:483] Stopping containers: [83b99d5f50de 70b7213c6b52 0a558c6ba534 8a3ae34e9cb3 b3f17efb3bfe 88d6c111039f 659d72bec753 b1d53cd33942 d5d0b7ba9f28]
	I0815 16:56:23.027392    4145 ssh_runner.go:195] Run: docker stop 83b99d5f50de 70b7213c6b52 0a558c6ba534 8a3ae34e9cb3 b3f17efb3bfe 88d6c111039f 659d72bec753 b1d53cd33942 d5d0b7ba9f28
	I0815 16:56:23.038355    4145 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 16:56:23.043751    4145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 16:56:23.046855    4145 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 16:56:23.046861    4145 kubeadm.go:157] found existing configuration files:
	
	I0815 16:56:23.046889    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/admin.conf
	I0815 16:56:23.049122    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 16:56:23.049146    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 16:56:23.052162    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/kubelet.conf
	I0815 16:56:23.055067    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 16:56:23.055088    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 16:56:23.057698    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/controller-manager.conf
	I0815 16:56:23.060134    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 16:56:23.060157    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 16:56:23.063235    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/scheduler.conf
	I0815 16:56:23.065602    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 16:56:23.065625    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
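	[Note: the grep/rm sequence above is the stale-kubeconfig sweep: each /etc/kubernetes/*.conf is checked for the expected control-plane endpoint, and any file that lacks it (here they are simply absent) is removed so the kubeadm init phases below can rewrite it. Condensed, with this run's port (50482):
	  for f in admin kubelet controller-manager scheduler; do
	    conf=/etc/kubernetes/$f.conf
	    sudo grep -q 'https://control-plane.minikube.internal:50482' "$conf" 2>/dev/null \
	      || sudo rm -f "$conf"    # missing or pointing elsewhere: let kubeadm regenerate it
	  done
	]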
	I0815 16:56:23.068128    4145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 16:56:23.071092    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:56:23.094290    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:56:18.904582    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:18.904700    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:56:18.916680    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:56:18.916761    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:56:18.929962    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:56:18.930054    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:56:18.944733    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:56:18.944832    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:56:18.956853    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:56:18.956924    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:56:18.973011    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:56:18.973084    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:56:18.991716    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:56:18.991792    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:56:19.008048    4006 logs.go:276] 0 containers: []
	W0815 16:56:19.008061    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:56:19.008135    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:56:19.020064    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:56:19.020084    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:56:19.020089    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:56:19.061263    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:56:19.061277    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:56:19.076109    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:56:19.076124    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:56:19.103271    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:56:19.103295    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:56:19.108574    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:56:19.108587    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:56:19.124691    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:56:19.124709    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:56:19.140729    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:56:19.140742    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:56:19.157597    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:56:19.157611    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:56:19.170079    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:56:19.170093    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:56:19.195997    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:56:19.196011    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:56:19.211663    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:56:19.211681    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:56:19.228777    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:56:19.228792    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:56:19.243408    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:56:19.243420    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:56:19.256476    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:56:19.256489    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:56:19.303236    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:56:19.303262    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:56:19.325375    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:56:19.325394    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:56:19.338466    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:56:19.338477    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:56:21.859782    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:23.742187    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:56:23.854059    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:56:23.886530    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
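	[Note: rather than a full kubeadm init, the restart path replays individual init phases against the existing state: certs, kubeconfig files, kubelet start, control-plane static pods, then local etcd. The equivalent sequence by hand, with the same config file and PATH override as the log:
	  B=/var/lib/minikube/binaries/v1.24.1
	  sudo env PATH="$B:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="$B:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="$B:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="$B:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="$B:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml
	]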
	I0815 16:56:23.908124    4145 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:56:23.908200    4145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:56:24.410266    4145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:56:24.910274    4145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:56:24.914361    4145 api_server.go:72] duration metric: took 1.006227917s to wait for apiserver process to appear ...
	I0815 16:56:24.914372    4145 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:56:24.914384    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
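	[Note: the readiness wait is two-stage: poll for a kube-apiserver process (the pgrep runs above, retried roughly every 500ms), then poll the /healthz endpoint until it answers. The repeated "stopped: ... context deadline exceeded" lines that follow are this probe timing out, which is the failure mode recorded for this test. Probing by hand would be roughly:
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'           # is the process up?
	  curl -sk --max-time 5 https://10.0.2.15:8443/healthz   # -k: minikube's CA is not in the host trust store
	]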
	I0815 16:56:26.861009    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:26.861127    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:56:26.872643    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:56:26.872723    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:56:26.884015    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:56:26.884083    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:56:26.895206    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:56:26.895284    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:56:26.907137    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:56:26.907211    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:56:26.918955    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:56:26.919027    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:56:26.934353    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:56:26.934426    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:56:26.945784    4006 logs.go:276] 0 containers: []
	W0815 16:56:26.945800    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:56:26.945864    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:56:26.960096    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:56:26.960118    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:56:26.960123    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:56:27.001035    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:56:27.001050    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:56:27.021576    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:56:27.021593    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:56:27.035959    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:56:27.035970    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:56:27.047826    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:56:27.047837    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:56:27.064340    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:56:27.064351    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:56:27.082600    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:56:27.082615    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:56:27.094451    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:56:27.094461    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:56:27.106405    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:56:27.106417    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:56:27.110843    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:56:27.110850    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:56:27.146722    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:56:27.146738    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:56:27.161611    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:56:27.161622    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:56:27.176012    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:56:27.176023    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:56:27.190351    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:56:27.190362    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:56:27.208499    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:56:27.208509    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:56:27.220311    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:56:27.220327    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:56:27.245459    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:56:27.245484    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:56:29.916590    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:29.916617    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:29.760651    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:34.917331    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:34.917358    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:34.763032    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:34.763276    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:56:34.787502    4006 logs.go:276] 2 containers: [095f0eb8e679 939e94e6f10f]
	I0815 16:56:34.787618    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:56:34.803668    4006 logs.go:276] 2 containers: [157e7eb31d6f b2768a0d890b]
	I0815 16:56:34.803766    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:56:34.818259    4006 logs.go:276] 1 containers: [c7c3829502b7]
	I0815 16:56:34.818330    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:56:34.829553    4006 logs.go:276] 2 containers: [379f405cc42c 9bb15813a91e]
	I0815 16:56:34.829613    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:56:34.864991    4006 logs.go:276] 1 containers: [460a9c5e2574]
	I0815 16:56:34.865072    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:56:34.877426    4006 logs.go:276] 2 containers: [ea15458b47dc e794e6c79e18]
	I0815 16:56:34.877493    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:56:34.888934    4006 logs.go:276] 0 containers: []
	W0815 16:56:34.888951    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:56:34.889035    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:56:34.899841    4006 logs.go:276] 2 containers: [aa28529f4138 43ea3f8a4bfd]
	I0815 16:56:34.899857    4006 logs.go:123] Gathering logs for kube-scheduler [379f405cc42c] ...
	I0815 16:56:34.899862    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 379f405cc42c"
	I0815 16:56:34.911545    4006 logs.go:123] Gathering logs for etcd [157e7eb31d6f] ...
	I0815 16:56:34.911559    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 157e7eb31d6f"
	I0815 16:56:34.926113    4006 logs.go:123] Gathering logs for coredns [c7c3829502b7] ...
	I0815 16:56:34.926124    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c3829502b7"
	I0815 16:56:34.937965    4006 logs.go:123] Gathering logs for kube-scheduler [9bb15813a91e] ...
	I0815 16:56:34.937976    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb15813a91e"
	I0815 16:56:34.952301    4006 logs.go:123] Gathering logs for kube-controller-manager [ea15458b47dc] ...
	I0815 16:56:34.952314    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea15458b47dc"
	I0815 16:56:34.969908    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:56:34.969919    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:56:34.993019    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:56:34.993029    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:56:35.034845    4006 logs.go:123] Gathering logs for etcd [b2768a0d890b] ...
	I0815 16:56:35.034860    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2768a0d890b"
	I0815 16:56:35.052384    4006 logs.go:123] Gathering logs for storage-provisioner [43ea3f8a4bfd] ...
	I0815 16:56:35.052394    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43ea3f8a4bfd"
	I0815 16:56:35.063879    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:56:35.063890    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:56:35.077398    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:56:35.077412    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:56:35.111416    4006 logs.go:123] Gathering logs for kube-apiserver [095f0eb8e679] ...
	I0815 16:56:35.111428    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 095f0eb8e679"
	I0815 16:56:35.126439    4006 logs.go:123] Gathering logs for kube-apiserver [939e94e6f10f] ...
	I0815 16:56:35.126452    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939e94e6f10f"
	I0815 16:56:35.145871    4006 logs.go:123] Gathering logs for kube-proxy [460a9c5e2574] ...
	I0815 16:56:35.145884    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460a9c5e2574"
	I0815 16:56:35.157762    4006 logs.go:123] Gathering logs for kube-controller-manager [e794e6c79e18] ...
	I0815 16:56:35.157774    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e794e6c79e18"
	I0815 16:56:35.170193    4006 logs.go:123] Gathering logs for storage-provisioner [aa28529f4138] ...
	I0815 16:56:35.170204    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa28529f4138"
	I0815 16:56:35.182099    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:56:35.182112    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:56:37.687726    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:39.917842    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:39.917867    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:42.690103    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:42.690181    4006 kubeadm.go:597] duration metric: took 4m4.254291083s to restartPrimaryControlPlane
	W0815 16:56:42.690219    4006 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 16:56:42.690238    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0815 16:56:43.709900    4006 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.019640125s)
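	[Note: when the phased restart cannot revive the control plane (the "Unable to restart control-plane node(s), will reset cluster" warning above), minikube falls back to wiping state and re-initializing: the forced kubeadm reset just completed, followed a few lines below by a full kubeadm init with --ignore-preflight-errors. The reset invocation, for reference:
	  sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	    kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
	]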
	I0815 16:56:43.709964    4006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:56:43.715003    4006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 16:56:43.717995    4006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 16:56:43.720845    4006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 16:56:43.720851    4006 kubeadm.go:157] found existing configuration files:
	
	I0815 16:56:43.720871    4006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/admin.conf
	I0815 16:56:43.723518    4006 kubeadm.go:163] "https://control-plane.minikube.internal:50257" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 16:56:43.723545    4006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 16:56:43.726405    4006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/kubelet.conf
	I0815 16:56:43.729599    4006 kubeadm.go:163] "https://control-plane.minikube.internal:50257" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 16:56:43.729624    4006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 16:56:43.733119    4006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/controller-manager.conf
	I0815 16:56:43.736057    4006 kubeadm.go:163] "https://control-plane.minikube.internal:50257" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 16:56:43.736081    4006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 16:56:43.738725    4006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/scheduler.conf
	I0815 16:56:43.741503    4006 kubeadm.go:163] "https://control-plane.minikube.internal:50257" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50257 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 16:56:43.741527    4006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 16:56:43.744581    4006 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 16:56:43.762526    4006 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0815 16:56:43.762568    4006 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 16:56:43.814945    4006 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 16:56:43.814997    4006 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 16:56:43.815056    4006 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 16:56:43.867727    4006 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 16:56:43.871884    4006 out.go:235]   - Generating certificates and keys ...
	I0815 16:56:43.871946    4006 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 16:56:43.872018    4006 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 16:56:43.872066    4006 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 16:56:43.872167    4006 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 16:56:43.872314    4006 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 16:56:43.872376    4006 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 16:56:43.872441    4006 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 16:56:43.872482    4006 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 16:56:43.872599    4006 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 16:56:43.872671    4006 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 16:56:43.872732    4006 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 16:56:43.872774    4006 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 16:56:43.994908    4006 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 16:56:44.114740    4006 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 16:56:44.155223    4006 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 16:56:44.205468    4006 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 16:56:44.237123    4006 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 16:56:44.237454    4006 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 16:56:44.237508    4006 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 16:56:44.330009    4006 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 16:56:44.918476    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:44.918560    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:44.334632    4006 out.go:235]   - Booting up control plane ...
	I0815 16:56:44.334675    4006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 16:56:44.334712    4006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 16:56:44.334751    4006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 16:56:44.334794    4006 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 16:56:44.339727    4006 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 16:56:48.342401    4006 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002324 seconds
	I0815 16:56:48.342584    4006 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 16:56:48.346913    4006 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 16:56:48.855926    4006 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 16:56:48.856831    4006 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-853000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 16:56:49.362216    4006 kubeadm.go:310] [bootstrap-token] Using token: 1q2sqo.90ak6svcf6z91vtn
	I0815 16:56:49.368591    4006 out.go:235]   - Configuring RBAC rules ...
	I0815 16:56:49.368668    4006 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 16:56:49.368764    4006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 16:56:49.374029    4006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 16:56:49.375366    4006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 16:56:49.376569    4006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 16:56:49.377700    4006 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 16:56:49.382010    4006 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 16:56:49.553224    4006 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 16:56:49.767949    4006 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 16:56:49.768343    4006 kubeadm.go:310] 
	I0815 16:56:49.768374    4006 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 16:56:49.768379    4006 kubeadm.go:310] 
	I0815 16:56:49.768416    4006 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 16:56:49.768420    4006 kubeadm.go:310] 
	I0815 16:56:49.768433    4006 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 16:56:49.768478    4006 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 16:56:49.768510    4006 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 16:56:49.768512    4006 kubeadm.go:310] 
	I0815 16:56:49.768546    4006 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 16:56:49.768552    4006 kubeadm.go:310] 
	I0815 16:56:49.768588    4006 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 16:56:49.768594    4006 kubeadm.go:310] 
	I0815 16:56:49.768633    4006 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 16:56:49.768683    4006 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 16:56:49.768759    4006 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 16:56:49.768776    4006 kubeadm.go:310] 
	I0815 16:56:49.768972    4006 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 16:56:49.769123    4006 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 16:56:49.769128    4006 kubeadm.go:310] 
	I0815 16:56:49.769173    4006 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1q2sqo.90ak6svcf6z91vtn \
	I0815 16:56:49.769260    4006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88912a497139cdcb80d3af465e15c830e797440a4ec3ed41d3c948a9662aad9e \
	I0815 16:56:49.769273    4006 kubeadm.go:310] 	--control-plane 
	I0815 16:56:49.769276    4006 kubeadm.go:310] 
	I0815 16:56:49.769318    4006 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 16:56:49.769324    4006 kubeadm.go:310] 
	I0815 16:56:49.769389    4006 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1q2sqo.90ak6svcf6z91vtn \
	I0815 16:56:49.769469    4006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88912a497139cdcb80d3af465e15c830e797440a4ec3ed41d3c948a9662aad9e 
	I0815 16:56:49.769575    4006 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
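	[Note: the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key; joining nodes use it to pin the CA they discover. It can be recomputed from the CA certificate (in minikube, under /var/lib/minikube/certs) with the standard kubeadm recipe:
	  # assumes an RSA CA key, which kubeadm generates by default
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	]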
	I0815 16:56:49.769585    4006 cni.go:84] Creating CNI manager for ""
	I0815 16:56:49.769594    4006 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:56:49.772494    4006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 16:56:49.778459    4006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 16:56:49.783095    4006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 16:56:49.787899    4006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 16:56:49.787974    4006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 16:56:49.787975    4006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-853000 minikube.k8s.io/updated_at=2024_08_15T16_56_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=running-upgrade-853000 minikube.k8s.io/primary=true
	I0815 16:56:49.791125    4006 ops.go:34] apiserver oom_adj: -16
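	[Note: the oom_adj value is read straight out of /proc for the API server process (the cat in the Run: line a few entries above); -16 means the kernel's OOM killer will sacrifice nearly anything else first:
	  cat /proc/$(pgrep kube-apiserver)/oom_adj    # -16: strongly shielded from the OOM killer
	]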
	I0815 16:56:49.837298    4006 kubeadm.go:1113] duration metric: took 49.3635ms to wait for elevateKubeSystemPrivileges
	I0815 16:56:49.837312    4006 kubeadm.go:394] duration metric: took 4m11.414990583s to StartCluster
	I0815 16:56:49.837322    4006 settings.go:142] acquiring lock: {Name:mk3ef55eecb064d007fbd1b55ea891b5b51acd22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:56:49.837408    4006 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:56:49.837778    4006 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/kubeconfig: {Name:mk7594709ce290a3e032dc58c8ec366ac5a2a141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:56:49.837971    4006 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:56:49.838064    4006 config.go:182] Loaded profile config "running-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 16:56:49.838056    4006 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 16:56:49.838095    4006 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-853000"
	I0815 16:56:49.838102    4006 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-853000"
	I0815 16:56:49.838107    4006 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-853000"
	W0815 16:56:49.838111    4006 addons.go:243] addon storage-provisioner should already be in state true
	I0815 16:56:49.838115    4006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-853000"
	I0815 16:56:49.838123    4006 host.go:66] Checking if "running-upgrade-853000" exists ...
	I0815 16:56:49.839096    4006 kapi.go:59] client config for running-upgrade-853000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/profiles/running-upgrade-853000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104479610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 16:56:49.839229    4006 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-853000"
	W0815 16:56:49.839234    4006 addons.go:243] addon default-storageclass should already be in state true
	I0815 16:56:49.839241    4006 host.go:66] Checking if "running-upgrade-853000" exists ...
	I0815 16:56:49.842529    4006 out.go:177] * Verifying Kubernetes components...
	I0815 16:56:49.842861    4006 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 16:56:49.846573    4006 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 16:56:49.846579    4006 sshutil.go:53] new ssh client: &{IP:localhost Port:50225 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/running-upgrade-853000/id_rsa Username:docker}
	I0815 16:56:49.850443    4006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:56:49.919433    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:49.919453    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:49.853387    4006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:56:49.857453    4006 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 16:56:49.857460    4006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 16:56:49.857466    4006 sshutil.go:53] new ssh client: &{IP:localhost Port:50225 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/running-upgrade-853000/id_rsa Username:docker}
	I0815 16:56:49.945537    4006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:56:49.951070    4006 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:56:49.951113    4006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:56:49.955067    4006 api_server.go:72] duration metric: took 117.082958ms to wait for apiserver process to appear ...
	I0815 16:56:49.955077    4006 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:56:49.955083    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:49.986617    4006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 16:56:50.009687    4006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 16:56:50.328715    4006 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 16:56:50.328726    4006 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 16:56:54.920547    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:54.920636    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:54.957341    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:54.957420    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:59.922381    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:59.922404    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:59.958074    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:59.958100    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:04.924085    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:04.924114    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:04.958561    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:04.958586    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:09.926256    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:09.926291    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:09.959147    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:09.959187    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:14.928564    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:14.928589    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:14.959914    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:14.959939    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:19.960631    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:19.960646    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0815 16:57:20.331497    4006 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0815 16:57:20.340653    4006 out.go:177] * Enabled addons: storage-provisioner
	I0815 16:57:19.930941    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:19.931022    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:20.347745    4006 addons.go:510] duration metric: took 30.50935275s for enable addons: enabled=[storage-provisioner]
	I0815 16:57:24.933360    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:24.933604    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:57:24.954229    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:57:24.954330    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:57:24.970638    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:57:24.970724    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:57:24.983872    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:57:24.983939    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:57:24.994785    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:57:24.994858    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:57:25.004865    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:57:25.004939    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:57:25.015276    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:57:25.015361    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:57:25.025358    4145 logs.go:276] 0 containers: []
	W0815 16:57:25.025370    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:57:25.025427    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:57:25.036759    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:57:25.036776    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:57:25.036782    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:57:25.051060    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:57:25.051070    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:57:25.062047    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:57:25.062058    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:57:25.074513    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:57:25.074525    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:57:25.086774    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:57:25.086789    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:57:25.104871    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:57:25.104883    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:57:25.120683    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:57:25.120699    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:57:25.132430    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:57:25.132447    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:57:25.169939    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:57:25.169948    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:57:25.174389    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:57:25.174400    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:57:25.189406    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:57:25.189418    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:57:25.215349    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:57:25.215357    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:57:25.228101    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:57:25.228115    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:57:25.309247    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:57:25.309261    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:57:25.349186    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:57:25.349199    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:57:25.365359    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:57:25.365370    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:57:25.379353    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:57:25.379366    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:57:27.893939    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:24.961694    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:24.961721    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:32.896366    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:32.896738    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:57:32.931660    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:57:32.931802    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:57:32.951369    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:57:32.951475    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:57:32.965835    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:57:32.965912    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:57:32.978271    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:57:32.978364    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:57:32.989468    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:57:32.989547    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:57:33.000244    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:57:33.000316    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:57:33.010079    4145 logs.go:276] 0 containers: []
	W0815 16:57:33.010089    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:57:33.010145    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:57:33.020497    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:57:33.020515    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:57:33.020520    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:57:33.024493    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:57:33.024503    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:57:33.038358    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:57:33.038369    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:57:33.060427    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:57:33.060441    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:57:33.077335    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:57:33.077349    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:57:33.088917    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:57:33.088929    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:57:33.101198    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:57:33.101211    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:57:33.139950    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:57:33.139959    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:57:33.175389    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:57:33.175399    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:57:33.187712    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:57:33.187725    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:57:33.205775    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:57:33.205785    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:57:33.217066    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:57:33.217078    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:57:33.229231    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:57:33.229243    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:57:33.242991    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:57:33.243001    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:57:33.280606    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:57:33.280617    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:57:33.298116    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:57:33.298128    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:57:33.309929    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:57:33.309952    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:57:29.963087    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:29.963128    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:35.837739    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:34.964798    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:34.964840    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:40.840365    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:40.840720    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:57:40.878579    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:57:40.878718    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:57:40.903474    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:57:40.903572    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:57:40.918368    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:57:40.918456    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:57:40.935634    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:57:40.935726    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:57:40.947585    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:57:40.947654    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:57:40.958168    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:57:40.958226    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:57:40.968536    4145 logs.go:276] 0 containers: []
	W0815 16:57:40.968548    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:57:40.968612    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:57:40.979270    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:57:40.979289    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:57:40.979295    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:57:40.991575    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:57:40.991586    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:57:41.010309    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:57:41.010320    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:57:41.033806    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:57:41.033816    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:57:41.070745    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:57:41.070760    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:57:41.109202    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:57:41.109217    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:57:41.125100    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:57:41.125113    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:57:41.140643    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:57:41.140656    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:57:41.152194    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:57:41.152204    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:57:41.165634    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:57:41.165646    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:57:41.185503    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:57:41.185517    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:57:41.197130    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:57:41.197141    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:57:41.214473    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:57:41.214483    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:57:41.219645    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:57:41.219655    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:57:41.254050    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:57:41.254063    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:57:41.268068    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:57:41.268080    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:57:41.281780    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:57:41.281791    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:57:39.965731    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:39.965780    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:43.794031    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:44.968187    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:44.968236    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:48.796454    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:48.796602    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:57:48.815217    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:57:48.815309    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:57:48.829786    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:57:48.829873    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:57:48.841501    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:57:48.841573    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:57:48.851827    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:57:48.851902    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:57:48.862307    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:57:48.862369    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:57:48.873167    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:57:48.873225    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:57:48.883520    4145 logs.go:276] 0 containers: []
	W0815 16:57:48.883532    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:57:48.883590    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:57:48.894127    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:57:48.894148    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:57:48.894156    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:57:48.934609    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:57:48.934619    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:57:48.948737    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:57:48.948751    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:57:48.959634    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:57:48.959646    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:57:48.973589    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:57:48.973599    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:57:48.989229    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:57:48.989240    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:57:49.012174    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:57:49.012183    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:57:49.026050    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:57:49.026061    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:57:49.040867    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:57:49.040884    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:57:49.052556    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:57:49.052567    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:57:49.063708    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:57:49.063724    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:57:49.075301    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:57:49.075315    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:57:49.079732    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:57:49.079742    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:57:49.117050    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:57:49.117064    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:57:49.128838    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:57:49.128851    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:57:49.147291    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:57:49.147302    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:57:49.182242    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:57:49.182257    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:57:51.699212    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:49.970588    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:49.970789    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:57:50.009767    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:57:50.009846    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:57:50.024665    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:57:50.024738    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:57:50.035056    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:57:50.035125    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:57:50.045122    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:57:50.045198    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:57:50.055238    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:57:50.055317    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:57:50.071194    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:57:50.071263    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:57:50.081638    4006 logs.go:276] 0 containers: []
	W0815 16:57:50.081648    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:57:50.081710    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:57:50.092593    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:57:50.092608    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:57:50.092613    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:57:50.104044    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:57:50.104057    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:57:50.108509    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:57:50.108517    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:57:50.123033    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:57:50.123046    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:57:50.134907    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:57:50.134920    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:57:50.153055    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:57:50.153065    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:57:50.164750    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:57:50.164760    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:57:50.187645    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:57:50.187653    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:57:50.220465    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:57:50.220472    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:57:50.256722    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:57:50.256734    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:57:50.270758    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:57:50.270769    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:57:50.286277    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:57:50.286289    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:57:50.301678    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:57:50.301692    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:57:52.816132    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:56.701566    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:56.701822    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:57:56.725102    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:57:56.725206    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:57:56.740534    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:57:56.740608    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:57:56.752884    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:57:56.752952    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:57:56.763506    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:57:56.763572    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:57:56.773484    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:57:56.773548    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:57:56.784441    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:57:56.784498    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:57:56.794233    4145 logs.go:276] 0 containers: []
	W0815 16:57:56.794245    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:57:56.794295    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:57:56.804913    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:57:56.804930    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:57:56.804935    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:57:56.844319    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:57:56.844329    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:57:56.858177    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:57:56.858190    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:57:56.873610    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:57:56.873623    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:57:56.885209    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:57:56.885221    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:57:56.920519    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:57:56.920530    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:57:56.958546    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:57:56.958556    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:57:56.970442    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:57:56.970454    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:57:56.988377    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:57:56.988391    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:57:57.002840    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:57:57.002853    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:57:57.020377    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:57:57.020389    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:57:57.039829    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:57:57.039839    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:57:57.051536    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:57:57.051550    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:57:57.065498    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:57:57.065514    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:57:57.070053    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:57:57.070060    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:57:57.083729    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:57:57.083741    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:57:57.097502    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:57:57.097516    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:57:57.818786    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:57.818952    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:57:57.829803    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:57:57.829880    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:57:57.840282    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:57:57.840353    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:57:57.850766    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:57:57.850836    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:57:57.861292    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:57:57.861354    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:57:57.871776    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:57:57.871842    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:57:57.882701    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:57:57.882771    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:57:57.893666    4006 logs.go:276] 0 containers: []
	W0815 16:57:57.893675    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:57:57.893728    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:57:57.904068    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:57:57.904094    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:57:57.904099    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:57:57.937797    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:57:57.937806    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:57:57.974922    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:57:57.974933    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:57:57.989575    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:57:57.989586    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:57:58.006605    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:57:58.006616    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:57:58.018454    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:57:58.018471    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:57:58.030344    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:57:58.030357    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:57:58.041783    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:57:58.041794    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:57:58.064744    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:57:58.064755    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:57:58.068904    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:57:58.068912    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:57:58.083114    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:57:58.083128    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:57:58.094887    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:57:58.094901    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:57:58.106991    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:57:58.107005    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:57:59.626782    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:00.624376    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:04.629138    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:04.629271    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:04.643758    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:58:04.643838    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:04.655635    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:58:04.655702    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:04.665833    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:58:04.665899    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:04.676661    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:58:04.676734    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:04.687811    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:58:04.687873    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:04.698555    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:58:04.698622    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:04.708521    4145 logs.go:276] 0 containers: []
	W0815 16:58:04.708533    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:04.708589    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:04.718696    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:58:04.718713    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:04.718720    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:04.722652    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:04.722662    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:04.757688    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:58:04.757700    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:58:04.769513    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:58:04.769524    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:58:04.786588    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:58:04.786599    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:58:04.804782    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:04.804794    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:04.843489    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:58:04.843497    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:58:04.858730    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:58:04.858741    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:58:04.870476    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:58:04.870488    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:58:04.882474    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:58:04.882485    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:04.894333    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:58:04.894346    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:58:04.908822    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:58:04.908833    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:58:04.950545    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:58:04.950559    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:58:04.964691    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:58:04.964705    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:58:04.975988    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:04.976003    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:04.999606    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:58:04.999615    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:58:05.013549    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:58:05.013563    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:58:07.527138    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:05.626826    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:05.626947    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:05.638939    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:58:05.639019    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:05.653826    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:58:05.653900    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:05.664536    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:58:05.664604    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:05.677209    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:58:05.677271    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:05.687606    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:58:05.687673    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:05.698052    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:58:05.698119    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:05.707785    4006 logs.go:276] 0 containers: []
	W0815 16:58:05.707797    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:05.707853    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:05.718724    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:58:05.718739    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:05.718744    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:05.754279    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:05.754288    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:05.759177    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:05.759186    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:05.794227    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:58:05.794241    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:05.809385    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:58:05.809396    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:58:05.829560    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:58:05.829575    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:58:05.841127    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:05.841141    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:05.863880    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:58:05.863888    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:58:05.878214    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:58:05.878231    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:58:05.892492    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:58:05.892507    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:58:05.904386    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:58:05.904396    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:58:05.916341    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:58:05.916351    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:58:05.928250    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:58:05.928261    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:08.441500    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:12.529590    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:12.529771    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:12.544424    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:58:12.544504    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:12.556495    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:58:12.556569    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:12.567364    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:58:12.567435    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:12.577970    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:58:12.578038    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:12.588207    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:58:12.588285    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:12.599027    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:58:12.599099    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:12.609456    4145 logs.go:276] 0 containers: []
	W0815 16:58:12.609475    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:12.609531    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:12.619978    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
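Each enumeration pass above issues the same docker query once per control-plane component, matching the kubelet's k8s_<component> container-name prefix and printing only the IDs. A standalone sketch of that pattern — the listContainers helper is illustrative, not minikube code, and it assumes a local docker CLI:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers reproduces the query from the log, e.g.
    //   docker ps -a --filter=name=k8s_etcd --format={{.ID}}
    // and returns the matching container IDs, possibly none.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            // Mirrors the "N containers: [...]" lines; an empty result is
            // what produces the `No container was found matching "kindnet"`
            // warning above.
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

Because the query uses docker ps -a, exited containers are included: two IDs for one component (as for kube-apiserver here) typically mean an old container plus its replacement, which is why several components are gathered twice in the steps that follow.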
	I0815 16:58:12.619994    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:58:12.619999    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:58:12.631248    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:58:12.631260    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:58:12.642107    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:12.642120    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:12.666778    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:58:12.666788    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:58:12.680493    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:58:12.680504    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:58:12.694736    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:58:12.694746    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:58:12.712886    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:58:12.712896    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:58:12.724524    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:58:12.724538    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:58:12.739509    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:58:12.739519    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:58:12.756570    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:58:12.756582    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:12.768179    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:12.768190    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:12.772853    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:58:12.772859    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:58:12.811086    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:58:12.811099    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:58:12.822334    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:58:12.822346    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:58:12.833802    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:12.833812    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:12.871830    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:12.871843    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:12.906994    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:58:12.907007    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
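The gathering steps themselves are uniform: container logs come from docker with the tail capped at 400 lines, while host services are read through journald. A minimal sketch of those two calls — the container ID below is a placeholder taken from the enumeration above, and the sketch assumes bash, docker, and journalctl are present on the guest:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println(cmd, "failed:", err)
        }
        fmt.Print(string(out))
    }

    func main() {
        // Per-container logs, last 400 lines, as in the "Gathering logs
        // for ..." steps; substitute any ID reported by the enumeration.
        run("docker logs --tail 400 659d72bec753")

        // Host-side units go through journald; -u can be repeated to merge
        // several units (here docker and cri-docker) into one 400-line tail.
        run("sudo journalctl -u docker -u cri-docker -n 400")
    }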
	I0815 16:58:13.443818    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:13.444040    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:13.471997    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:58:13.472101    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:13.487872    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:58:13.487948    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:13.501443    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:58:13.501515    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:13.512949    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:58:13.513020    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:13.523100    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:58:13.523170    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:13.533720    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:58:13.533798    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:13.544106    4006 logs.go:276] 0 containers: []
	W0815 16:58:13.544117    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:13.544173    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:13.554129    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:58:13.554145    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:58:13.554149    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:58:13.565931    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:58:13.565942    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:13.580690    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:58:13.580705    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:58:13.598773    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:13.598782    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:13.603814    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:58:13.603824    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:58:13.616135    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:58:13.616145    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:58:13.631274    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:58:13.631285    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:58:13.646016    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:58:13.646027    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:58:13.657789    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:58:13.657800    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:58:13.670066    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:13.670077    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:13.695518    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:58:13.695527    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
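The "container status" step above uses a small shell fallback chain: the backquoted which crictl || echo crictl expands to crictl's full path when it is installed (or to the bare word crictl, so the command still parses), and if that invocation fails outright, the trailing || sudo docker ps -a lists containers through Docker instead. A sketch of the same invocation driven from Go, assuming a bash shell on the target:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Identical fallback chain to the log line above: prefer crictl if
        // present, otherwise fall back to plain docker ps -a.
        const cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(string(out))
    }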
	I0815 16:58:13.707294    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:13.707306    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:13.742162    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:13.742175    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:15.422909    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:16.279230    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:20.424569    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:20.424782    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:20.442448    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:58:20.442539    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:20.457350    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:58:20.457419    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:20.477221    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:58:20.477283    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:20.491853    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:58:20.491933    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:20.503683    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:58:20.503760    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:20.514596    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:58:20.514654    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:20.525115    4145 logs.go:276] 0 containers: []
	W0815 16:58:20.525128    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:20.525175    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:20.535746    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:58:20.535764    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:20.535770    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:20.570876    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:58:20.570888    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:58:20.582085    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:58:20.582098    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:58:20.605082    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:58:20.605095    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:58:20.623900    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:58:20.623914    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:58:20.639337    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:58:20.639350    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:58:20.650611    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:58:20.650623    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:58:20.664933    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:58:20.664947    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:58:20.678907    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:58:20.678918    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:58:20.690649    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:58:20.690659    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:58:20.701954    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:58:20.701964    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:58:20.713085    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:58:20.713096    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:20.725255    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:20.725265    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:20.763923    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:20.763934    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:20.768253    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:58:20.768260    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:58:20.782463    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:58:20.782474    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:58:20.821953    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:20.821964    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:21.281711    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:21.281997    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:21.310675    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:58:21.310784    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:21.332488    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:58:21.332583    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:21.346116    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:58:21.346195    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:21.358109    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:58:21.358178    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:21.368291    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:58:21.368363    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:21.379093    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:58:21.379168    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:21.389610    4006 logs.go:276] 0 containers: []
	W0815 16:58:21.389620    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:21.389679    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:21.399756    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:58:21.399769    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:21.399774    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:21.434086    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:58:21.434096    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:58:21.448206    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:58:21.448220    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:58:21.460122    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:58:21.460136    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:21.474784    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:58:21.474798    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:58:21.486761    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:21.486775    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:21.512192    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:21.512203    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:21.516698    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:21.516705    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:21.554072    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:58:21.554082    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:58:21.568201    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:58:21.568214    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:58:21.580075    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:58:21.580084    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:58:21.597987    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:58:21.598000    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:58:21.609535    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:58:21.609546    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:23.348290    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:24.125032    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:28.350652    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:28.350834    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:28.369713    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:58:28.369805    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:28.384534    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:58:28.384614    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:28.396822    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:58:28.396891    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:28.411912    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:58:28.411982    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:28.422808    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:58:28.422883    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:28.435928    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:58:28.436000    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:28.445975    4145 logs.go:276] 0 containers: []
	W0815 16:58:28.445987    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:28.446045    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:28.457056    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:58:28.457073    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:28.457079    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:28.461369    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:58:28.461377    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:58:28.475039    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:58:28.475049    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:58:28.486130    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:28.486144    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:28.520676    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:58:28.520687    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:58:28.535204    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:58:28.535218    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:58:28.575582    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:58:28.575597    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:58:28.590052    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:58:28.590063    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:58:28.604424    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:58:28.604436    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:58:28.619710    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:58:28.619720    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:58:28.631147    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:28.631157    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:28.654490    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:28.654498    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:28.690543    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:58:28.690551    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:58:28.704947    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:58:28.704959    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:58:28.722617    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:58:28.722626    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:28.734707    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:58:28.734722    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:58:28.748212    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:58:28.748224    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:58:31.261987    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:29.127467    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:29.127628    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:29.138891    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:58:29.138966    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:29.149416    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:58:29.149486    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:29.165353    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:58:29.165419    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:29.176491    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:58:29.176548    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:29.187040    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:58:29.187102    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:29.197945    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:58:29.198017    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:29.207619    4006 logs.go:276] 0 containers: []
	W0815 16:58:29.207630    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:29.207675    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:29.218004    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:58:29.218019    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:58:29.218025    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:58:29.232040    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:58:29.232053    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:58:29.243418    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:58:29.243431    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:58:29.260774    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:58:29.260788    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:58:29.277482    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:29.277493    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:29.302209    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:58:29.302224    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:29.314302    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:29.314316    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:29.349108    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:29.349116    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:29.382429    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:58:29.382443    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:58:29.394559    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:58:29.394571    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:29.410476    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:58:29.410487    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:58:29.423055    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:29.423067    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:29.428063    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:58:29.428073    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:58:31.943166    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:36.264556    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:36.264970    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:36.306732    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:58:36.306861    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:36.326831    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:58:36.326932    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:36.341643    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:58:36.341725    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:36.354357    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:58:36.354438    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:36.365560    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:58:36.365622    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:36.376228    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:58:36.376301    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:36.387357    4145 logs.go:276] 0 containers: []
	W0815 16:58:36.387368    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:36.387430    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:36.398203    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:58:36.398222    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:58:36.398228    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:58:36.411333    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:58:36.411347    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:58:36.427197    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:36.427208    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:36.450481    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:36.450499    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:36.487330    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:36.487342    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:36.522544    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:58:36.522558    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:58:36.534181    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:58:36.534193    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:58:36.545573    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:58:36.545586    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:58:36.560683    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:58:36.560695    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:36.575616    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:58:36.575635    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:58:36.589562    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:58:36.589572    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:58:36.605083    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:58:36.605097    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:58:36.618925    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:58:36.618937    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:58:36.636385    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:58:36.636398    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:58:36.662753    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:58:36.662768    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:58:36.673962    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:36.673972    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:36.678168    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:58:36.678175    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:58:36.945556    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:36.945682    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:36.957069    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:58:36.957147    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:36.967703    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:58:36.967774    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:36.978088    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:58:36.978155    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:36.988368    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:58:36.988428    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:37.001886    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:58:37.001953    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:37.017253    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:58:37.017322    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:37.027335    4006 logs.go:276] 0 containers: []
	W0815 16:58:37.027347    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:37.027405    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:37.038133    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:58:37.038146    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:58:37.038151    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:58:37.049227    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:58:37.049240    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:58:37.065261    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:58:37.065274    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:37.082104    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:58:37.082117    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:58:37.093788    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:37.093798    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:37.130353    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:58:37.130364    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:58:37.145202    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:58:37.145215    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:58:37.165322    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:58:37.165334    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:58:37.177301    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:58:37.177312    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:58:37.194690    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:37.194700    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:37.218362    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:58:37.218373    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:37.229786    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:37.229796    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:37.264673    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:37.264684    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
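For reference, the dmesg step, flag by flag as I read the util-linux options: -H human-readable output, -P no pager, -L=never no color, --level warn,err,crit,alert,emerg keeps only warning-and-worse records, and tail -n 400 caps the result at the most recent 400 lines. A one-call sketch, assuming bash and util-linux dmesg:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same filter as the log line above: human-readable, uncolored,
        // pager-less kernel messages at warning level or worse, last 400.
        const cmd = "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("dmesg failed:", err)
        }
        fmt.Print(string(out))
    }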
	I0815 16:58:39.217690    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:39.771431    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:44.220184    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:44.220453    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:44.245514    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:58:44.245631    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:44.263973    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:58:44.264057    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:44.281092    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:58:44.281159    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:44.292222    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:58:44.292293    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:44.305875    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:58:44.305941    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:44.317637    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:58:44.317719    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:44.328196    4145 logs.go:276] 0 containers: []
	W0815 16:58:44.328210    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:44.328269    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:44.338547    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:58:44.338567    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:58:44.338573    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:58:44.349420    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:44.349431    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:44.383677    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:58:44.383687    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:58:44.397951    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:58:44.397965    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:58:44.412564    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:58:44.412574    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:58:44.424635    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:58:44.424650    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:58:44.436983    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:58:44.436995    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:58:44.449604    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:44.449616    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:44.487117    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:44.487131    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:44.510135    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:58:44.510143    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:58:44.548749    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:58:44.548762    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:58:44.562719    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:58:44.562731    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:58:44.576793    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:58:44.576803    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:44.589132    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:44.589145    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:44.593885    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:58:44.593893    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:58:44.613104    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:58:44.613115    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:58:44.628461    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:58:44.628475    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:58:47.149794    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:44.773870    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:44.774044    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:44.790263    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:58:44.790351    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:44.816043    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:58:44.816115    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:44.827674    4006 logs.go:276] 2 containers: [8855e6664bde 656a333c1c75]
	I0815 16:58:44.827744    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:44.838378    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:58:44.838451    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:44.848741    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:58:44.848811    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:44.859518    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:58:44.859589    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:44.869609    4006 logs.go:276] 0 containers: []
	W0815 16:58:44.869620    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:44.869679    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:44.880229    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:58:44.880243    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:44.880247    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:44.949219    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:58:44.949233    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:58:44.964549    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:58:44.964564    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:58:44.976437    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:58:44.976448    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:58:44.987984    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:58:44.987998    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:58:45.002544    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:45.002555    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:45.027073    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:45.027082    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:45.062242    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:45.062255    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:45.069365    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:58:45.069374    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:58:45.085429    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:58:45.085439    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:45.100564    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:58:45.100576    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:58:45.112996    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:58:45.113007    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:58:45.133710    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:58:45.133720    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:47.647011    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:52.152307    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:52.152788    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:52.199986    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:58:52.200120    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:52.219308    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:58:52.219397    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:52.233229    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:58:52.233300    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:52.245935    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:58:52.246024    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:52.256834    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:58:52.256907    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:52.267494    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:58:52.267563    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:52.279372    4145 logs.go:276] 0 containers: []
	W0815 16:58:52.279384    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:52.279448    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:52.290136    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:58:52.290155    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:58:52.290162    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:58:52.310255    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:58:52.310266    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:58:52.329948    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:58:52.329963    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:58:52.342387    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:58:52.342399    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:58:52.354435    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:58:52.354448    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:58:52.372703    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:52.372717    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:52.398533    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:52.398547    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:52.436317    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:58:52.436343    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:58:52.452534    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:58:52.452546    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:58:52.491573    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:58:52.491589    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:58:52.505905    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:58:52.505916    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:58:52.519693    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:58:52.519702    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:58:52.536627    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:58:52.536642    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:52.549809    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:52.549823    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:52.588418    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:52.588431    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:52.592815    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:58:52.592822    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:58:52.604299    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:58:52.604313    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:58:52.649310    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:52.649399    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:52.660116    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:58:52.660182    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:52.671664    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:58:52.671743    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:52.682229    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:58:52.682302    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:52.692325    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:58:52.692387    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:52.703585    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:58:52.703663    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:52.713861    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:58:52.713932    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:52.724479    4006 logs.go:276] 0 containers: []
	W0815 16:58:52.724496    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:52.724556    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:52.736264    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:58:52.736279    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:58:52.736285    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:58:52.747128    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:58:52.747140    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:58:52.761807    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:58:52.761818    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:58:52.773653    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:58:52.773668    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:58:52.791821    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:58:52.791831    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:52.803839    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:52.803850    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:52.839288    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:58:52.839302    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:58:52.853960    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:58:52.853973    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:58:52.865097    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:58:52.865108    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:58:52.876662    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:52.876675    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:52.901280    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:52.901291    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:52.934802    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:52.934810    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:52.939122    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:58:52.939129    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:58:52.953329    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:58:52.953344    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:58:52.964694    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:58:52.964707    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:58:55.118169    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:55.476814    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:00.120884    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:00.121196    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:00.144813    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:00.144938    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:00.160447    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:00.160524    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:00.173011    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:00.173087    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:00.184146    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:00.184221    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:00.198581    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:00.198650    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:00.209199    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:00.209265    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:00.219866    4145 logs.go:276] 0 containers: []
	W0815 16:59:00.219875    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:00.219929    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:00.231584    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:00.231602    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:00.231607    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:00.267732    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:00.267741    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:00.281794    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:00.281805    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:00.307816    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:00.307826    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:00.319886    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:00.319899    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:00.331733    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:00.331747    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:00.345762    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:00.345774    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:00.384160    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:00.384173    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:00.396198    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:00.396212    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:00.411351    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:00.411361    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:00.423356    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:00.423370    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:00.427811    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:00.427819    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:00.446131    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:00.446144    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:00.457440    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:00.457453    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:00.472136    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:00.472152    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:00.514328    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:00.514341    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:00.530891    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:00.530900    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:03.057319    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:00.478475    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:00.478550    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:00.490387    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:00.490459    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:00.506067    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:00.506135    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:00.517969    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:00.518038    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:00.529953    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:00.530076    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:00.541977    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:00.542047    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:00.553461    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:00.553536    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:00.564517    4006 logs.go:276] 0 containers: []
	W0815 16:59:00.564532    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:00.564588    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:00.575178    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:00.575194    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:00.575201    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:00.588791    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:00.588801    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:00.606720    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:00.606729    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:00.621788    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:00.621800    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:00.633442    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:00.633454    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:00.645165    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:00.645175    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:00.656612    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:00.656623    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:00.691749    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:00.691763    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:00.729476    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:00.729491    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:00.740958    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:00.740968    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:00.764341    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:00.764349    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:00.768813    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:00.768822    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:00.780697    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:00.780708    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:00.792538    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:00.792550    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:00.807691    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:00.807709    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:03.332706    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:08.059701    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:08.059902    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:08.083407    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:08.083488    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:08.094552    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:08.094623    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:08.105027    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:08.105099    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:08.115176    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:08.115254    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:08.150952    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:08.151026    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:08.172979    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:08.173045    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:08.182979    4145 logs.go:276] 0 containers: []
	W0815 16:59:08.182989    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:08.183042    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:08.193864    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:08.193884    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:08.193889    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:08.205803    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:08.205815    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:08.217582    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:08.217594    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:08.231786    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:08.231796    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:08.243438    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:08.243448    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:08.282558    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:08.282576    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:08.286764    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:08.286770    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:08.327908    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:08.327922    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:08.335127    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:08.335311    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:08.347333    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:08.347400    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:08.358843    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:08.358908    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:08.370506    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:08.370583    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:08.382312    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:08.382394    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:08.393474    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:08.393545    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:08.404910    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:08.404987    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:08.416308    4006 logs.go:276] 0 containers: []
	W0815 16:59:08.416319    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:08.416382    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:08.427820    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:08.427838    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:08.427843    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:08.465924    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:08.465938    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:08.471447    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:08.471459    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:08.511823    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:08.511835    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:08.526686    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:08.526696    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:08.544998    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:08.545008    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:08.556364    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:08.556376    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:08.580556    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:08.580564    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:08.592053    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:08.592064    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:08.603976    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:08.603986    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:08.621906    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:08.621917    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:08.638631    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:08.638641    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:08.653530    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:08.653540    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:08.664968    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:08.664978    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:08.676968    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:08.676979    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:08.343542    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:08.343553    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:08.363058    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:08.363070    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:08.375655    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:08.375667    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:08.413119    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:08.413132    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:08.429719    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:08.429730    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:08.443016    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:08.443029    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:08.468762    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:08.468775    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:08.484597    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:08.484609    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:08.496821    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:08.496834    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:11.013517    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:11.190532    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:16.014145    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:16.014338    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:16.029874    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:16.029960    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:16.042880    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:16.042956    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:16.054107    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:16.054182    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:16.064230    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:16.064301    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:16.074677    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:16.074745    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:16.085563    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:16.085634    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:16.096204    4145 logs.go:276] 0 containers: []
	W0815 16:59:16.096214    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:16.096273    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:16.106890    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:16.106907    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:16.106912    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:16.120865    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:16.120878    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:16.135010    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:16.135024    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:16.146430    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:16.146443    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:16.182541    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:16.182548    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:16.197498    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:16.197511    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:16.209986    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:16.209996    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:16.253616    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:16.253632    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:16.272960    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:16.272972    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:16.285928    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:16.285943    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:16.307861    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:16.307880    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:16.327186    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:16.327202    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:16.340023    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:16.340037    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:16.378045    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:16.378059    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:16.390695    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:16.390707    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:16.403059    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:16.403069    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:16.426995    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:16.427010    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:16.192825    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:16.192927    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:16.208566    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:16.208642    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:16.220724    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:16.220795    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:16.232579    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:16.232716    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:16.243913    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:16.243980    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:16.254735    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:16.254797    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:16.266202    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:16.266269    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:16.277131    4006 logs.go:276] 0 containers: []
	W0815 16:59:16.277142    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:16.277200    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:16.288641    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:16.288661    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:16.288666    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:16.301944    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:16.301956    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:16.314129    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:16.314141    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:16.318798    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:16.318808    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:16.333821    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:16.333838    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:16.347173    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:16.347185    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:16.365215    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:16.365228    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:16.404328    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:16.404338    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:16.419437    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:16.419448    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:16.432497    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:16.432508    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:16.466509    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:16.466523    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:16.489765    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:16.489774    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:16.501218    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:16.501231    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:16.513482    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:16.513493    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:16.528455    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:16.528467    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:18.932744    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:19.042767    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:23.935162    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:23.935338    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:23.949703    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:23.949789    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:23.962130    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:23.962206    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:23.976055    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:23.976142    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:23.986921    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:23.986992    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:23.997711    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:23.997778    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:24.008123    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:24.008193    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:24.018488    4145 logs.go:276] 0 containers: []
	W0815 16:59:24.018501    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:24.018564    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:24.029455    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:24.029471    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:24.029476    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:24.043660    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:24.043673    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:24.058686    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:24.058698    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:24.071408    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:24.071417    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:24.083859    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:24.083871    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:24.122741    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:24.122762    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:24.161328    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:24.161345    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:24.179948    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:24.179964    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:24.192351    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:24.192365    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:24.218421    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:24.218434    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:24.233067    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:24.233082    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:24.256787    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:24.256798    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:24.275994    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:24.276007    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:24.291218    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:24.291231    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:24.295575    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:24.295588    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:24.337891    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:24.337903    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:24.356692    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:24.356704    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:26.878486    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:24.043915    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:24.044015    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:24.058726    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:24.058788    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:24.070192    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:24.070266    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:24.081911    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:24.081989    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:24.093265    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:24.093335    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:24.104449    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:24.104517    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:24.116853    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:24.116929    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:24.127674    4006 logs.go:276] 0 containers: []
	W0815 16:59:24.127686    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:24.127744    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:24.140063    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:24.140081    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:24.140086    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:24.155810    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:24.155826    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:24.169705    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:24.169717    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:24.185480    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:24.185494    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:24.190652    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:24.190663    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:24.203835    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:24.203846    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:24.216382    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:24.216394    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:24.229342    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:24.229357    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:24.248525    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:24.248538    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:24.284936    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:24.284952    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:24.325439    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:24.325450    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:24.337433    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:24.337444    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:24.349664    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:24.349676    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:24.365823    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:24.365835    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:24.379974    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:24.379984    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:26.906544    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:31.880901    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:31.881314    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:31.917303    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:31.917425    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:31.941939    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:31.942021    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:31.956160    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:31.956238    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:31.968177    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:31.968263    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:31.979389    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:31.979463    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:31.991396    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:31.991489    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:32.002420    4145 logs.go:276] 0 containers: []
	W0815 16:59:32.002432    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:32.002493    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:32.013566    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:32.013586    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:32.013592    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:32.025867    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:32.025878    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:32.038732    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:32.038746    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:32.052029    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:32.052055    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:32.093212    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:32.093233    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:32.112242    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:32.112259    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:32.136673    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:32.136680    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:32.162258    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:32.162268    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:32.166676    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:32.166683    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:32.205521    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:32.205535    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:32.221637    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:32.221654    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:32.234046    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:32.234060    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:32.245759    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:32.245774    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:32.258305    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:32.258317    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:32.275219    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:32.275231    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:32.293938    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:32.293950    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:32.333891    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:32.333904    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:31.908802    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:31.909065    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:31.932852    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:31.932949    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:31.950930    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:31.951010    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:31.968481    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:31.968519    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:31.981143    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:31.981199    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:31.992975    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:31.993027    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:32.005543    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:32.005611    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:32.022565    4006 logs.go:276] 0 containers: []
	W0815 16:59:32.022577    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:32.022643    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:32.034452    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:32.034470    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:32.034478    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:32.047551    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:32.047562    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:32.064483    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:32.064496    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:32.081235    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:32.081247    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:32.121098    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:32.121110    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:32.136644    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:32.136655    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:32.150785    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:32.150795    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:32.176663    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:32.176679    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:32.198502    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:32.198517    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:32.213354    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:32.213368    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:32.225851    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:32.225863    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:32.238991    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:32.239003    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:32.253612    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:32.253624    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:32.290962    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:32.290977    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:32.296527    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:32.296535    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:34.850953    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:34.818305    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:39.851597    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:39.851700    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:39.871168    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:39.871256    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:39.890346    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:39.890453    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:39.908624    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:39.908693    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:39.920119    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:39.920152    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:39.931696    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:39.931764    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:39.943610    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:39.943690    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:39.959817    4145 logs.go:276] 0 containers: []
	W0815 16:59:39.959831    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:39.959894    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:39.976576    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:39.976598    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:39.976604    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:39.981584    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:39.981597    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:40.019589    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:40.019602    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:40.034833    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:40.034844    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:40.076505    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:40.076528    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:40.092994    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:40.093006    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:40.104847    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:40.104861    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:40.128370    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:40.128389    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:40.143177    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:40.143193    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:40.181844    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:40.181857    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:40.201745    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:40.201765    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:40.214775    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:40.214784    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:40.230681    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:40.230693    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:40.242780    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:40.242795    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:40.254547    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:40.254562    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:40.265442    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:40.265454    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:40.277154    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:40.277169    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:42.802255    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:39.821219    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:39.821596    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:39.851529    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:39.851666    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:39.875572    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:39.875655    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:39.890503    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:39.890544    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:39.903215    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:39.903281    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:39.919702    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:39.919779    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:39.931976    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:39.932011    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:39.948372    4006 logs.go:276] 0 containers: []
	W0815 16:59:39.948385    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:39.948449    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:39.966818    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:39.966838    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:39.966844    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:39.984327    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:39.984338    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:39.997024    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:39.997038    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:40.014652    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:40.014668    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:40.027207    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:40.027219    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:40.063910    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:40.063922    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:40.105601    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:40.105610    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:40.120688    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:40.120703    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:40.134161    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:40.134173    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:40.150397    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:40.150409    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:40.155153    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:40.155161    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:40.167137    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:40.167148    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:40.180224    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:40.180236    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:40.193360    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:40.193373    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:40.212561    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:40.212578    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:42.741505    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:47.804731    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:47.804799    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:47.820508    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:47.820583    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:47.832425    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:47.832498    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:47.844031    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:47.844105    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:47.855301    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:47.855374    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:47.866511    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:47.866581    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:47.877643    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:47.877713    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:47.888767    4145 logs.go:276] 0 containers: []
	W0815 16:59:47.888780    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:47.888838    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:47.900264    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:47.900284    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:47.900290    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:47.915917    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:47.915929    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:47.932485    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:47.932499    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:47.972930    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:47.972942    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:48.011031    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:48.011044    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:48.056509    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:48.056522    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:48.071154    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:48.071165    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:48.090513    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:48.090525    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:48.113178    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:48.113191    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:48.125767    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:48.125784    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:48.149060    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:48.149078    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:48.153725    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:48.153733    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:48.164997    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:48.165012    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:48.176723    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:48.176735    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:48.188598    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:48.188610    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:48.206792    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:48.206804    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:48.221230    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:48.221244    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:47.744116    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:47.744355    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:47.769842    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:47.769964    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:47.786872    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:47.786952    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:47.800964    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:47.801047    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:47.812963    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:47.813039    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:47.827356    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:47.827426    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:47.839167    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:47.839239    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:47.850497    4006 logs.go:276] 0 containers: []
	W0815 16:59:47.850509    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:47.850567    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:47.862330    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:47.862348    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:47.862353    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:47.877376    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:47.877389    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:47.893737    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:47.893748    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:47.907350    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:47.907362    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:47.926211    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:47.926223    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:47.938974    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:47.938988    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:47.973632    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:47.973641    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:48.011466    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:48.011474    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:48.023710    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:48.023721    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:48.038823    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:48.038836    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:48.063293    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:48.063306    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:48.075902    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:48.075915    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:48.080915    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:48.080926    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:48.093432    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:48.093443    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:48.106246    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:48.106258    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:50.744764    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:50.622461    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:55.745943    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:55.746034    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:55.758194    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:55.758266    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:55.769445    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:55.769519    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:55.786009    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:55.786080    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:55.801879    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:55.801955    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:55.813302    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:55.813380    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:55.824546    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:55.824607    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:55.835745    4145 logs.go:276] 0 containers: []
	W0815 16:59:55.835760    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:55.835818    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:55.847144    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:55.847175    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:55.847181    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:55.867339    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:55.867351    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:55.882541    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:55.882559    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:55.887218    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:55.887227    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:55.902319    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:55.902331    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:55.920963    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:55.920978    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:55.933804    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:55.933815    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:55.973180    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:55.973194    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:55.985478    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:55.985491    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:56.009488    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:56.009506    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:56.053704    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:56.053716    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:56.064465    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:56.064476    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:56.084974    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:56.084985    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:56.096504    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:56.096515    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:56.132177    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:56.132194    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:56.147116    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:56.147130    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:56.158605    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:56.158618    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:55.625114    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:55.625402    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:55.650445    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 16:59:55.650549    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:55.666720    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 16:59:55.666796    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:55.680745    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 16:59:55.680826    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:55.692189    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 16:59:55.692258    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:55.702919    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 16:59:55.702991    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:55.723565    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 16:59:55.723632    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:55.737673    4006 logs.go:276] 0 containers: []
	W0815 16:59:55.737687    4006 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:55.737745    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:55.748495    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 16:59:55.748521    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 16:59:55.748540    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 16:59:55.767137    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 16:59:55.767149    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 16:59:55.780003    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 16:59:55.780015    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 16:59:55.792702    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 16:59:55.792712    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 16:59:55.805714    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 16:59:55.805727    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 16:59:55.824474    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 16:59:55.824487    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 16:59:55.840120    4006 logs.go:123] Gathering logs for container status ...
	I0815 16:59:55.840132    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:55.852801    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:55.852814    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:55.858106    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 16:59:55.858128    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 16:59:55.872955    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 16:59:55.872967    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 16:59:55.889409    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 16:59:55.889421    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 16:59:55.905312    4006 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:55.905320    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:55.930509    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:55.930522    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:55.965534    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:55.965544    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:56.002416    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 16:59:56.002427    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 16:59:58.519276    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:58.671647    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:03.521613    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:03.521741    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:03.533041    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 17:00:03.533126    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:03.544063    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 17:00:03.544134    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:03.554504    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 17:00:03.554583    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:03.565067    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 17:00:03.565136    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:03.576365    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 17:00:03.576435    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:03.590463    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 17:00:03.590536    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:03.602404    4006 logs.go:276] 0 containers: []
	W0815 17:00:03.602415    4006 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:03.602473    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:03.615150    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 17:00:03.615168    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:03.615173    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:03.619656    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 17:00:03.619662    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 17:00:03.637845    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 17:00:03.637859    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 17:00:03.650207    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 17:00:03.650218    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 17:00:03.662513    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 17:00:03.662524    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 17:00:03.677488    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:03.677499    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:03.714704    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 17:00:03.714718    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 17:00:03.727989    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:03.727999    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:03.767022    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 17:00:03.767031    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 17:00:03.780100    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 17:00:03.780111    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 17:00:03.673959    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:03.674066    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:03.685968    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 17:00:03.686044    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:03.696955    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 17:00:03.697023    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:03.707670    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 17:00:03.707733    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:03.719342    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 17:00:03.719438    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:03.731356    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 17:00:03.731433    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:03.742852    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 17:00:03.742932    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:03.754357    4145 logs.go:276] 0 containers: []
	W0815 17:00:03.754369    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:03.754434    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:03.765815    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 17:00:03.765854    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 17:00:03.765862    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 17:00:03.781607    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:03.781617    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:03.785997    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:03.786008    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:03.828322    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 17:00:03.828334    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 17:00:03.844999    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:03.845012    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:03.885517    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 17:00:03.885531    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 17:00:03.899347    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 17:00:03.899359    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 17:00:03.911570    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 17:00:03.911580    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 17:00:03.924334    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:03.924346    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:03.947472    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 17:00:03.947485    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 17:00:03.985907    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 17:00:03.985921    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 17:00:03.997152    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 17:00:03.997165    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 17:00:04.012000    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 17:00:04.012009    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 17:00:04.029683    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 17:00:04.029699    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 17:00:04.043518    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 17:00:04.043530    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 17:00:04.054675    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 17:00:04.054688    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 17:00:04.065508    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:00:04.065519    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:06.579896    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:03.798990    4006 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:03.799003    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:03.824068    4006 logs.go:123] Gathering logs for container status ...
	I0815 17:00:03.824085    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:03.836942    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 17:00:03.836954    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 17:00:03.852064    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 17:00:03.852075    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 17:00:03.866549    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 17:00:03.866561    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 17:00:06.383791    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:11.582476    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:11.582568    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:11.594227    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 17:00:11.594313    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:11.605521    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 17:00:11.605600    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:11.616957    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 17:00:11.617032    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:11.630038    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 17:00:11.630127    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:11.641182    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 17:00:11.641250    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:11.653107    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 17:00:11.653174    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:11.664050    4145 logs.go:276] 0 containers: []
	W0815 17:00:11.664063    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:11.664121    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:11.675065    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 17:00:11.675083    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:11.675088    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:11.712760    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 17:00:11.712774    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 17:00:11.732347    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 17:00:11.732358    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 17:00:11.744307    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 17:00:11.744319    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 17:00:11.768630    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:11.768641    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:11.806567    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 17:00:11.806576    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 17:00:11.818156    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:11.818168    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:11.841863    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 17:00:11.841872    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 17:00:11.853671    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 17:00:11.853681    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 17:00:11.867479    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 17:00:11.867488    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 17:00:11.878813    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:00:11.878823    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:11.890717    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:11.890727    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:11.894806    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 17:00:11.894814    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 17:00:11.910160    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 17:00:11.910174    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 17:00:11.921253    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 17:00:11.921264    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 17:00:11.936242    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 17:00:11.936255    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 17:00:11.949852    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 17:00:11.949862    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 17:00:11.385272    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:11.385376    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:11.396842    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 17:00:11.396908    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:11.408165    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 17:00:11.408241    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:11.423125    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 17:00:11.423188    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:11.433962    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 17:00:11.434031    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:11.444531    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 17:00:11.444603    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:11.455617    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 17:00:11.455686    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:11.465848    4006 logs.go:276] 0 containers: []
	W0815 17:00:11.465863    4006 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:11.465924    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:11.476962    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 17:00:11.476979    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 17:00:11.476985    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 17:00:11.504863    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 17:00:11.504875    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 17:00:11.519452    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 17:00:11.519465    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 17:00:11.531288    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:11.531299    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:11.535642    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 17:00:11.535648    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 17:00:11.547435    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 17:00:11.547446    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 17:00:11.559665    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 17:00:11.559675    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 17:00:11.571002    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 17:00:11.571012    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 17:00:11.585771    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:11.585784    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:11.627218    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 17:00:11.627232    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 17:00:11.640188    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 17:00:11.640205    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 17:00:11.653070    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 17:00:11.653081    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 17:00:11.675720    4006 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:11.675728    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:11.701816    4006 logs.go:123] Gathering logs for container status ...
	I0815 17:00:11.701833    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:11.714853    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:11.714864    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:14.488401    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:14.253516    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:19.490796    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:19.490896    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:19.501710    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 17:00:19.501793    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:19.512426    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 17:00:19.512498    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:19.523654    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 17:00:19.523727    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:19.535470    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 17:00:19.535564    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:19.551172    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 17:00:19.551244    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:19.566243    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 17:00:19.566314    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:19.577765    4145 logs.go:276] 0 containers: []
	W0815 17:00:19.577777    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:19.577838    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:19.589840    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 17:00:19.589855    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 17:00:19.589859    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 17:00:19.604587    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 17:00:19.604596    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 17:00:19.616755    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:19.616772    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:19.641772    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:19.641782    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:19.681561    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 17:00:19.681572    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 17:00:19.719865    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 17:00:19.719882    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 17:00:19.731452    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 17:00:19.731464    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 17:00:19.742825    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:00:19.742841    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:19.754965    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:19.754976    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:19.759408    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:19.759417    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:19.794033    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 17:00:19.794047    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 17:00:19.809597    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 17:00:19.809611    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 17:00:19.824845    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 17:00:19.824855    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 17:00:19.838702    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 17:00:19.838714    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 17:00:19.850168    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 17:00:19.850181    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 17:00:19.868068    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 17:00:19.868082    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 17:00:19.883238    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 17:00:19.883247    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 17:00:22.396786    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:19.256371    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:19.256711    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:19.298279    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 17:00:19.298417    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:19.320278    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 17:00:19.320372    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:19.335830    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 17:00:19.335912    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:19.351440    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 17:00:19.351506    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:19.361701    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 17:00:19.361783    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:19.372296    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 17:00:19.372369    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:19.383372    4006 logs.go:276] 0 containers: []
	W0815 17:00:19.383385    4006 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:19.383444    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:19.398771    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 17:00:19.398788    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:19.398793    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:19.403293    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:19.403302    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:19.437928    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 17:00:19.437940    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 17:00:19.455903    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 17:00:19.455916    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 17:00:19.467666    4006 logs.go:123] Gathering logs for container status ...
	I0815 17:00:19.467677    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:19.480105    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:19.480117    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:19.515588    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 17:00:19.515600    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 17:00:19.531302    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 17:00:19.531316    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 17:00:19.546269    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 17:00:19.546283    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 17:00:19.559557    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 17:00:19.559572    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 17:00:19.575354    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 17:00:19.575369    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 17:00:19.588789    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 17:00:19.588801    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 17:00:19.601805    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 17:00:19.601817    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 17:00:19.617290    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 17:00:19.617388    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 17:00:19.637497    4006 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:19.637508    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:22.164204    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:27.397935    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:27.397974    4145 kubeadm.go:597] duration metric: took 4m4.386610291s to restartPrimaryControlPlane
	W0815 17:00:27.398010    4145 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 17:00:27.398025    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0815 17:00:28.389587    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:00:28.394884    4145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 17:00:28.398018    4145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 17:00:28.401182    4145 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 17:00:28.401189    4145 kubeadm.go:157] found existing configuration files:
	
	I0815 17:00:28.401217    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/admin.conf
	I0815 17:00:28.403757    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 17:00:28.403787    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 17:00:28.406489    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/kubelet.conf
	I0815 17:00:28.409463    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 17:00:28.409485    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 17:00:28.412300    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/controller-manager.conf
	I0815 17:00:28.414573    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 17:00:28.414597    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 17:00:28.417698    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/scheduler.conf
	I0815 17:00:28.420245    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 17:00:28.420266    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
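
Editor's note: the four grep/rm cycles above are minikube's stale-config cleanup. For each kubeconfig under /etc/kubernetes it keeps the file only if it already references the expected apiserver endpoint (https://control-plane.minikube.internal:50482); otherwise it removes the file so the upcoming `kubeadm init` regenerates it. A compact sketch of that loop (illustrative only; the shell-outs mirror the `Run:` lines above):

```go
package main

import (
	"fmt"
	"os/exec"
)

// Sketch of the stale-config cleanup logged above: each kubeconfig is
// kept only if it already references the expected apiserver endpoint;
// otherwise it is deleted so `kubeadm init` rewrites it from scratch.
func main() {
	endpoint := "https://control-plane.minikube.internal:50482"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the endpoint (or the file itself) is
		// missing -- exactly the "Process exited with status 2" above.
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}
```
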
	I0815 17:00:28.422678    4145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 17:00:28.439633    4145 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0815 17:00:28.439713    4145 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 17:00:28.491576    4145 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 17:00:28.491716    4145 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 17:00:28.491772    4145 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 17:00:28.542340    4145 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 17:00:27.167086    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:27.167616    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:27.211673    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 17:00:27.211809    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:27.231530    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 17:00:27.231624    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:27.248595    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 17:00:27.248671    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:27.260853    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 17:00:27.260925    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:27.271777    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 17:00:27.271845    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:27.282591    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 17:00:27.282655    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:27.296318    4006 logs.go:276] 0 containers: []
	W0815 17:00:27.296332    4006 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:27.296397    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:27.307491    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 17:00:27.307509    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 17:00:27.307515    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 17:00:27.322106    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 17:00:27.322117    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 17:00:27.334097    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 17:00:27.334109    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 17:00:27.346876    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 17:00:27.346887    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 17:00:27.371317    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:27.371331    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:27.407904    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:27.407922    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:27.413446    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:27.413458    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:27.454606    4006 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:27.454618    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:27.479316    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 17:00:27.479333    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 17:00:27.491766    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 17:00:27.491777    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 17:00:27.504715    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 17:00:27.504729    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 17:00:27.519503    4006 logs.go:123] Gathering logs for container status ...
	I0815 17:00:27.519515    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:27.533335    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 17:00:27.533347    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 17:00:27.548045    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 17:00:27.548058    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 17:00:27.566558    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 17:00:27.566575    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 17:00:28.546502    4145 out.go:235]   - Generating certificates and keys ...
	I0815 17:00:28.546534    4145 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 17:00:28.546566    4145 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 17:00:28.546611    4145 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 17:00:28.546646    4145 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 17:00:28.546677    4145 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 17:00:28.546716    4145 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 17:00:28.546747    4145 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 17:00:28.546821    4145 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 17:00:28.546871    4145 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 17:00:28.546934    4145 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 17:00:28.546954    4145 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 17:00:28.546980    4145 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 17:00:28.664170    4145 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 17:00:28.785538    4145 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 17:00:28.831739    4145 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 17:00:28.951096    4145 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 17:00:28.981713    4145 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 17:00:28.982130    4145 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 17:00:28.982199    4145 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 17:00:29.068689    4145 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 17:00:29.072889    4145 out.go:235]   - Booting up control plane ...
	I0815 17:00:29.072935    4145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 17:00:29.073042    4145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 17:00:29.073140    4145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 17:00:29.076031    4145 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 17:00:29.076945    4145 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 17:00:33.079120    4145 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001633 seconds
	I0815 17:00:33.079180    4145 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 17:00:33.082794    4145 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 17:00:30.081526    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:33.595178    4145 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 17:00:33.595380    4145 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-889000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 17:00:34.098829    4145 kubeadm.go:310] [bootstrap-token] Using token: 2x6pd0.lf3zfx9c874ubs97
	I0815 17:00:34.105114    4145 out.go:235]   - Configuring RBAC rules ...
	I0815 17:00:34.105166    4145 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 17:00:34.105219    4145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 17:00:34.107250    4145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 17:00:34.108777    4145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 17:00:34.109655    4145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 17:00:34.110485    4145 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 17:00:34.114816    4145 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 17:00:34.276315    4145 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 17:00:34.502743    4145 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 17:00:34.503217    4145 kubeadm.go:310] 
	I0815 17:00:34.503249    4145 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 17:00:34.503257    4145 kubeadm.go:310] 
	I0815 17:00:34.503299    4145 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 17:00:34.503304    4145 kubeadm.go:310] 
	I0815 17:00:34.503317    4145 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 17:00:34.503346    4145 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 17:00:34.503368    4145 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 17:00:34.503370    4145 kubeadm.go:310] 
	I0815 17:00:34.503396    4145 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 17:00:34.503399    4145 kubeadm.go:310] 
	I0815 17:00:34.503421    4145 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 17:00:34.503424    4145 kubeadm.go:310] 
	I0815 17:00:34.503451    4145 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 17:00:34.503486    4145 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 17:00:34.503521    4145 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 17:00:34.503526    4145 kubeadm.go:310] 
	I0815 17:00:34.503566    4145 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 17:00:34.503633    4145 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 17:00:34.503638    4145 kubeadm.go:310] 
	I0815 17:00:34.503683    4145 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2x6pd0.lf3zfx9c874ubs97 \
	I0815 17:00:34.503730    4145 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88912a497139cdcb80d3af465e15c830e797440a4ec3ed41d3c948a9662aad9e \
	I0815 17:00:34.503740    4145 kubeadm.go:310] 	--control-plane 
	I0815 17:00:34.503742    4145 kubeadm.go:310] 
	I0815 17:00:34.503785    4145 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 17:00:34.503790    4145 kubeadm.go:310] 
	I0815 17:00:34.503839    4145 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2x6pd0.lf3zfx9c874ubs97 \
	I0815 17:00:34.503888    4145 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88912a497139cdcb80d3af465e15c830e797440a4ec3ed41d3c948a9662aad9e 
	I0815 17:00:34.504021    4145 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 17:00:34.504033    4145 cni.go:84] Creating CNI manager for ""
	I0815 17:00:34.504043    4145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 17:00:34.507174    4145 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 17:00:34.514050    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 17:00:34.517053    4145 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
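
Editor's note: the 496-byte payload written to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration; the log does not show its contents. A representative (not verbatim) bridge conflist of that shape, plus the equivalent write — the subnet is an assumption based on the 10.244.0.0/24 PodCIDR shown in the "describe nodes" section later in this report:

```go
package main

import "os"

// Representative (not verbatim) bridge CNI conflist of the kind minikube
// writes above. The 10.244.0.0/24 subnet is an assumption taken from the
// node PodCIDR visible later in this report.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Mirrors the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```
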
	I0815 17:00:34.522009    4145 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 17:00:34.522056    4145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:00:34.522056    4145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-889000 minikube.k8s.io/updated_at=2024_08_15T17_00_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=stopped-upgrade-889000 minikube.k8s.io/primary=true
	I0815 17:00:34.525240    4145 ops.go:34] apiserver oom_adj: -16
	I0815 17:00:34.552882    4145 kubeadm.go:1113] duration metric: took 30.866584ms to wait for elevateKubeSystemPrivileges
	I0815 17:00:34.566441    4145 kubeadm.go:394] duration metric: took 4m11.568617208s to StartCluster
	I0815 17:00:34.566462    4145 settings.go:142] acquiring lock: {Name:mk3ef55eecb064d007fbd1b55ea891b5b51acd22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:00:34.566545    4145 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:00:34.567008    4145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/kubeconfig: {Name:mk7594709ce290a3e032dc58c8ec366ac5a2a141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:00:34.567204    4145 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:00:34.567299    4145 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 17:00:34.567283    4145 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 17:00:34.567319    4145 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-889000"
	I0815 17:00:34.567334    4145 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-889000"
	I0815 17:00:34.567336    4145 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-889000"
	W0815 17:00:34.567338    4145 addons.go:243] addon storage-provisioner should already be in state true
	I0815 17:00:34.567346    4145 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-889000"
	I0815 17:00:34.567351    4145 host.go:66] Checking if "stopped-upgrade-889000" exists ...
	I0815 17:00:34.568230    4145 kapi.go:59] client config for stopped-upgrade-889000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066e9610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 17:00:34.568348    4145 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-889000"
	W0815 17:00:34.568353    4145 addons.go:243] addon default-storageclass should already be in state true
	I0815 17:00:34.568359    4145 host.go:66] Checking if "stopped-upgrade-889000" exists ...
	I0815 17:00:34.570990    4145 out.go:177] * Verifying Kubernetes components...
	I0815 17:00:34.571325    4145 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 17:00:34.575100    4145 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 17:00:34.575109    4145 sshutil.go:53] new ssh client: &{IP:localhost Port:50447 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/id_rsa Username:docker}
	I0815 17:00:34.578951    4145 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 17:00:34.583003    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:00:34.587074    4145 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:00:34.587080    4145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 17:00:34.587087    4145 sshutil.go:53] new ssh client: &{IP:localhost Port:50447 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/id_rsa Username:docker}
	I0815 17:00:34.660638    4145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:00:34.667436    4145 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:00:34.667493    4145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:00:34.673329    4145 api_server.go:72] duration metric: took 106.113916ms to wait for apiserver process to appear ...
	I0815 17:00:34.673337    4145 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:00:34.673343    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:34.675851    4145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 17:00:34.707908    4145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:00:35.020799    4145 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 17:00:35.020812    4145 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 17:00:35.083797    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:35.083942    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:35.095425    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 17:00:35.095492    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:35.106262    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 17:00:35.106334    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:35.117155    4006 logs.go:276] 4 containers: [424cd520c960 9ce6c140fd49 8855e6664bde 656a333c1c75]
	I0815 17:00:35.117236    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:35.128025    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 17:00:35.128093    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:35.139060    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 17:00:35.139131    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:35.149480    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 17:00:35.149552    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:35.159984    4006 logs.go:276] 0 containers: []
	W0815 17:00:35.159996    4006 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:35.160051    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:35.171556    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 17:00:35.171572    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 17:00:35.171577    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 17:00:35.183501    4006 logs.go:123] Gathering logs for coredns [8855e6664bde] ...
	I0815 17:00:35.183515    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8855e6664bde"
	I0815 17:00:35.195196    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 17:00:35.195206    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 17:00:35.209761    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:35.209771    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:35.214881    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:35.214890    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:35.249318    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 17:00:35.249332    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 17:00:35.260852    4006 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:35.260865    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:35.286265    4006 logs.go:123] Gathering logs for coredns [656a333c1c75] ...
	I0815 17:00:35.286280    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 656a333c1c75"
	I0815 17:00:35.301003    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 17:00:35.301014    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 17:00:35.315952    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 17:00:35.315963    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 17:00:35.334624    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 17:00:35.334634    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 17:00:35.349445    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:35.349454    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:35.386048    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 17:00:35.386065    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 17:00:35.407140    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 17:00:35.407153    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 17:00:35.421936    4006 logs.go:123] Gathering logs for container status ...
	I0815 17:00:35.421947    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:37.934260    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:39.675511    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:39.675559    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:42.936536    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:42.936704    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:42.947610    4006 logs.go:276] 1 containers: [c4a80ba8e080]
	I0815 17:00:42.947681    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:42.958086    4006 logs.go:276] 1 containers: [467ecbfeafa9]
	I0815 17:00:42.958152    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:42.969089    4006 logs.go:276] 4 containers: [d1bd85ce91e2 d5b496b8fd75 424cd520c960 9ce6c140fd49]
	I0815 17:00:42.969166    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:42.979925    4006 logs.go:276] 1 containers: [92c277b13674]
	I0815 17:00:42.979996    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:42.990380    4006 logs.go:276] 1 containers: [fdf8575c139e]
	I0815 17:00:42.990451    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:43.001321    4006 logs.go:276] 1 containers: [0cf61fd363f9]
	I0815 17:00:43.001386    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:43.012706    4006 logs.go:276] 0 containers: []
	W0815 17:00:43.012718    4006 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:43.012781    4006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:43.023447    4006 logs.go:276] 1 containers: [be4ef6142ab4]
	I0815 17:00:43.023464    4006 logs.go:123] Gathering logs for kube-apiserver [c4a80ba8e080] ...
	I0815 17:00:43.023469    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4a80ba8e080"
	I0815 17:00:43.040304    4006 logs.go:123] Gathering logs for coredns [424cd520c960] ...
	I0815 17:00:43.040317    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 424cd520c960"
	I0815 17:00:43.052747    4006 logs.go:123] Gathering logs for kube-controller-manager [0cf61fd363f9] ...
	I0815 17:00:43.052760    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cf61fd363f9"
	I0815 17:00:43.072832    4006 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:43.072842    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:43.109654    4006 logs.go:123] Gathering logs for storage-provisioner [be4ef6142ab4] ...
	I0815 17:00:43.109666    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4ef6142ab4"
	I0815 17:00:43.122951    4006 logs.go:123] Gathering logs for kube-proxy [fdf8575c139e] ...
	I0815 17:00:43.122963    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf8575c139e"
	I0815 17:00:43.134648    4006 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:43.134658    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:43.158870    4006 logs.go:123] Gathering logs for container status ...
	I0815 17:00:43.158882    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:43.170949    4006 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:43.170961    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:43.206031    4006 logs.go:123] Gathering logs for etcd [467ecbfeafa9] ...
	I0815 17:00:43.206040    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467ecbfeafa9"
	I0815 17:00:43.222987    4006 logs.go:123] Gathering logs for coredns [d1bd85ce91e2] ...
	I0815 17:00:43.222998    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1bd85ce91e2"
	I0815 17:00:43.237058    4006 logs.go:123] Gathering logs for coredns [9ce6c140fd49] ...
	I0815 17:00:43.237069    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce6c140fd49"
	I0815 17:00:43.248460    4006 logs.go:123] Gathering logs for kube-scheduler [92c277b13674] ...
	I0815 17:00:43.248470    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92c277b13674"
	I0815 17:00:43.263035    4006 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:43.263045    4006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:43.267597    4006 logs.go:123] Gathering logs for coredns [d5b496b8fd75] ...
	I0815 17:00:43.267604    4006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5b496b8fd75"
	I0815 17:00:44.675956    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:44.675996    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:45.781444    4006 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:50.782578    4006 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:50.786752    4006 out.go:201] 
	W0815 17:00:50.790740    4006 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0815 17:00:50.790746    4006 out.go:270] * 
	W0815 17:00:50.791231    4006 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:00:50.806736    4006 out.go:201] 
	I0815 17:00:49.676441    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:49.676501    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:54.677051    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:54.677105    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:59.678125    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:59.678176    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:01:04.679265    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:01:04.679326    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0815 17:01:05.023521    4145 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0815 17:01:05.027844    4145 out.go:177] * Enabled addons: storage-provisioner
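
Editor's note: both profiles end up in the same terminal loop. api_server.go:253 issues a GET against https://10.0.2.15:8443/healthz, and api_server.go:269 records each attempt dying with a client timeout roughly 5s later, until the overall 6m0s node wait expires. A minimal sketch of such a poll loop (illustrative, not minikube's code; skipping TLS verification for the VM's self-signed cert is an assumption):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Sketch of the healthz poll logged above: each attempt GETs /healthz
// with a short per-request timeout, retrying until an overall deadline.
func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between attempts in the log
		Transport: &http.Transport{
			// Assumption for illustration: the apiserver cert inside the
			// VM is self-signed, so a bare probe must skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver healthz never reported healthy")
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err) // this run: GUEST_START failure above
	}
}
```
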
	
	
	==> Docker <==
	-- Journal begins at Thu 2024-08-15 23:52:02 UTC, ends at Fri 2024-08-16 00:01:06 UTC. --
	Aug 16 00:00:44 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:00:44Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 16 00:00:49 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:00:49Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 16 00:00:51 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:00:51Z" level=error msg="ContainerStats resp: {0x400087aa40 linux}"
	Aug 16 00:00:51 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:00:51Z" level=error msg="ContainerStats resp: {0x400087abc0 linux}"
	Aug 16 00:00:52 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:00:52Z" level=error msg="ContainerStats resp: {0x4000359140 linux}"
	Aug 16 00:00:53 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:00:53Z" level=error msg="ContainerStats resp: {0x40005d0240 linux}"
	Aug 16 00:00:53 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:00:53Z" level=error msg="ContainerStats resp: {0x4000773500 linux}"
	Aug 16 00:00:53 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:00:53Z" level=error msg="ContainerStats resp: {0x40007739c0 linux}"
	Aug 16 00:00:53 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:00:53Z" level=error msg="ContainerStats resp: {0x40005d1080 linux}"
	Aug 16 00:00:53 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:00:53Z" level=error msg="ContainerStats resp: {0x40000b7440 linux}"
	Aug 16 00:00:53 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:00:53Z" level=error msg="ContainerStats resp: {0x40005d1700 linux}"
	Aug 16 00:00:53 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:00:53Z" level=error msg="ContainerStats resp: {0x40005d1b00 linux}"
	Aug 16 00:00:54 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:00:54Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 16 00:00:59 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:00:59Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 16 00:01:03 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:01:03Z" level=error msg="ContainerStats resp: {0x40007fa700 linux}"
	Aug 16 00:01:03 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:01:03Z" level=error msg="ContainerStats resp: {0x40005d0080 linux}"
	Aug 16 00:01:04 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:01:04Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 16 00:01:04 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:01:04Z" level=error msg="ContainerStats resp: {0x40005d1480 linux}"
	Aug 16 00:01:05 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:01:05Z" level=error msg="ContainerStats resp: {0x4000858f40 linux}"
	Aug 16 00:01:05 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:01:05Z" level=error msg="ContainerStats resp: {0x4000772940 linux}"
	Aug 16 00:01:05 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:01:05Z" level=error msg="ContainerStats resp: {0x4000859940 linux}"
	Aug 16 00:01:05 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:01:05Z" level=error msg="ContainerStats resp: {0x4000859cc0 linux}"
	Aug 16 00:01:05 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:01:05Z" level=error msg="ContainerStats resp: {0x40008d40c0 linux}"
	Aug 16 00:01:05 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:01:05Z" level=error msg="ContainerStats resp: {0x40008d47c0 linux}"
	Aug 16 00:01:05 running-upgrade-853000 cri-dockerd[3135]: time="2024-08-16T00:01:05Z" level=error msg="ContainerStats resp: {0x400087a080 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	d1bd85ce91e27       edaa71f2aee88       24 seconds ago      Running             coredns                   2                   b6652c728834a
	d5b496b8fd756       edaa71f2aee88       24 seconds ago      Running             coredns                   2                   9ca16b3bd8c23
	424cd520c960f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   9ca16b3bd8c23
	9ce6c140fd49c       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   b6652c728834a
	fdf8575c139e4       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   656b7a13ae283
	be4ef6142ab44       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   9203af4aca8e3
	0cf61fd363f9c       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   fec875c99763a
	92c277b136746       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   3c377da6bf0f0
	467ecbfeafa92       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   e0e00c82d7daa
	c4a80ba8e0803       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   d275ed147b270
	
	
	==> coredns [424cd520c960] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6535412031103298989.957774633042179511. HINFO: read udp 10.244.0.2:35229->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6535412031103298989.957774633042179511. HINFO: read udp 10.244.0.2:56097->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6535412031103298989.957774633042179511. HINFO: read udp 10.244.0.2:47350->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6535412031103298989.957774633042179511. HINFO: read udp 10.244.0.2:45792->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6535412031103298989.957774633042179511. HINFO: read udp 10.244.0.2:33205->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6535412031103298989.957774633042179511. HINFO: read udp 10.244.0.2:48109->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6535412031103298989.957774633042179511. HINFO: read udp 10.244.0.2:58739->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6535412031103298989.957774633042179511. HINFO: read udp 10.244.0.2:46905->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6535412031103298989.957774633042179511. HINFO: read udp 10.244.0.2:45701->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6535412031103298989.957774633042179511. HINFO: read udp 10.244.0.2:48473->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9ce6c140fd49] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1306148069476729149.383365996714657596. HINFO: read udp 10.244.0.3:59750->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1306148069476729149.383365996714657596. HINFO: read udp 10.244.0.3:33225->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1306148069476729149.383365996714657596. HINFO: read udp 10.244.0.3:49471->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1306148069476729149.383365996714657596. HINFO: read udp 10.244.0.3:51529->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1306148069476729149.383365996714657596. HINFO: read udp 10.244.0.3:54902->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1306148069476729149.383365996714657596. HINFO: read udp 10.244.0.3:46442->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1306148069476729149.383365996714657596. HINFO: read udp 10.244.0.3:46834->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1306148069476729149.383365996714657596. HINFO: read udp 10.244.0.3:54593->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1306148069476729149.383365996714657596. HINFO: read udp 10.244.0.3:39968->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1306148069476729149.383365996714657596. HINFO: read udp 10.244.0.3:48579->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d1bd85ce91e2] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5826984589133138156.9017977821910910928. HINFO: read udp 10.244.0.3:40403->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5826984589133138156.9017977821910910928. HINFO: read udp 10.244.0.3:41296->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5826984589133138156.9017977821910910928. HINFO: read udp 10.244.0.3:58990->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5826984589133138156.9017977821910910928. HINFO: read udp 10.244.0.3:39331->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5826984589133138156.9017977821910910928. HINFO: read udp 10.244.0.3:37591->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5826984589133138156.9017977821910910928. HINFO: read udp 10.244.0.3:47567->10.0.2.3:53: i/o timeout
	
	
	==> coredns [d5b496b8fd75] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1988823670041602736.7541163837047994595. HINFO: read udp 10.244.0.2:46277->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1988823670041602736.7541163837047994595. HINFO: read udp 10.244.0.2:47634->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1988823670041602736.7541163837047994595. HINFO: read udp 10.244.0.2:49266->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1988823670041602736.7541163837047994595. HINFO: read udp 10.244.0.2:49657->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1988823670041602736.7541163837047994595. HINFO: read udp 10.244.0.2:36773->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1988823670041602736.7541163837047994595. HINFO: read udp 10.244.0.2:56624->10.0.2.3:53: i/o timeout
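
Editor's note: all four CoreDNS excerpts fail identically. The random-label HINFO self-check queries that CoreDNS forwards to its upstream resolver at 10.0.2.3:53 — the DNS server built into QEMU's user-mode (slirp) networking — all time out, so the guest has no working upstream DNS. A quick stdlib-Go probe of that upstream (diagnostic sketch only, run from inside the guest) would confirm it:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// Diagnostic sketch: query the same upstream (10.0.2.3:53, QEMU's slirp
// DNS) that CoreDNS forwards to, with a deadline, to check whether the
// i/o timeouts in the logs above are reproducible.
func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.0.2.3:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.io")
	if err != nil {
		fmt.Println("upstream DNS unreachable:", err) // expected here: i/o timeout
		return
	}
	fmt.Println("upstream DNS ok:", addrs)
}
```
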
	
	
	==> describe nodes <==
	Name:               running-upgrade-853000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-853000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=running-upgrade-853000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T16_56_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-853000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 00:01:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:56:49 +0000   Thu, 15 Aug 2024 23:56:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:56:49 +0000   Thu, 15 Aug 2024 23:56:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:56:49 +0000   Thu, 15 Aug 2024 23:56:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:56:49 +0000   Thu, 15 Aug 2024 23:56:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-853000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 3edfd6be1b2b4a2a94725ceba953c0a5
	  System UUID:                3edfd6be1b2b4a2a94725ceba953c0a5
	  Boot ID:                    1a7abc11-17ca-40f6-87f9-9d55ef35904b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2468f                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m2s
	  kube-system                 coredns-6d4b75cb6d-qq88g                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m2s
	  kube-system                 etcd-running-upgrade-853000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-853000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-853000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-hg2gb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-running-upgrade-853000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-853000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-853000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-853000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-853000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m3s   node-controller  Node running-upgrade-853000 event: Registered Node running-upgrade-853000 in Controller
	
	
	==> dmesg <==
	[  +1.707145] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.079228] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.083219] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.150198] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.085438] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.083702] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.640672] systemd-fstab-generator[1287]: Ignoring "noauto" for root device
	[  +8.647947] systemd-fstab-generator[1928]: Ignoring "noauto" for root device
	[  +2.734903] systemd-fstab-generator[2206]: Ignoring "noauto" for root device
	[  +0.144401] systemd-fstab-generator[2240]: Ignoring "noauto" for root device
	[  +0.093185] systemd-fstab-generator[2251]: Ignoring "noauto" for root device
	[  +0.090131] systemd-fstab-generator[2264]: Ignoring "noauto" for root device
	[  +2.882393] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.208968] systemd-fstab-generator[3091]: Ignoring "noauto" for root device
	[  +0.084750] systemd-fstab-generator[3103]: Ignoring "noauto" for root device
	[  +0.079207] systemd-fstab-generator[3114]: Ignoring "noauto" for root device
	[  +0.079502] systemd-fstab-generator[3128]: Ignoring "noauto" for root device
	[  +2.332387] systemd-fstab-generator[3280]: Ignoring "noauto" for root device
	[  +2.289092] systemd-fstab-generator[3642]: Ignoring "noauto" for root device
	[  +0.983678] systemd-fstab-generator[3874]: Ignoring "noauto" for root device
	[ +18.498259] kauditd_printk_skb: 68 callbacks suppressed
	[Aug15 23:53] kauditd_printk_skb: 21 callbacks suppressed
	[Aug15 23:56] systemd-fstab-generator[11986]: Ignoring "noauto" for root device
	[  +5.150772] systemd-fstab-generator[12569]: Ignoring "noauto" for root device
	[  +0.464229] systemd-fstab-generator[12705]: Ignoring "noauto" for root device
	
	
	==> etcd [467ecbfeafa9] <==
	{"level":"info","ts":"2024-08-15T23:56:45.537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-15T23:56:45.537Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-15T23:56:45.539Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T23:56:45.539Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T23:56:45.539Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T23:56:45.539Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-15T23:56:45.539Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-15T23:56:45.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-15T23:56:45.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-15T23:56:45.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-15T23:56:45.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:56:45.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-15T23:56:45.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-15T23:56:45.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-15T23:56:45.925Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-853000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T23:56:45.925Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:56:45.925Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T23:56:45.925Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T23:56:45.925Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T23:56:45.926Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-15T23:56:45.926Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T23:56:45.926Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T23:56:45.927Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:56:45.927Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:56:45.927Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 00:01:07 up 9 min,  0 users,  load average: 0.25, 0.39, 0.22
	Linux running-upgrade-853000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [c4a80ba8e080] <==
	I0815 23:56:47.179760       1 controller.go:611] quota admission added evaluator for: namespaces
	I0815 23:56:47.221983       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0815 23:56:47.222033       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 23:56:47.222042       1 cache.go:39] Caches are synced for autoregister controller
	I0815 23:56:47.222115       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0815 23:56:47.222194       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0815 23:56:47.239290       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0815 23:56:47.962438       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0815 23:56:48.129483       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0815 23:56:48.130815       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0815 23:56:48.130828       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 23:56:48.250276       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 23:56:48.264266       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 23:56:48.285716       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0815 23:56:48.287822       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0815 23:56:48.288255       1 controller.go:611] quota admission added evaluator for: endpoints
	I0815 23:56:48.292021       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 23:56:49.254251       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0815 23:56:49.585454       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0815 23:56:49.588453       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0815 23:56:49.595152       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0815 23:56:49.636281       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 23:57:04.015102       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0815 23:57:04.161757       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0815 23:57:04.548994       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [0cf61fd363f9] <==
	I0815 23:57:03.311229       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0815 23:57:03.311377       1 event.go:294] "Event occurred" object="running-upgrade-853000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-853000 event: Registered Node running-upgrade-853000 in Controller"
	I0815 23:57:03.310993       1 shared_informer.go:262] Caches are synced for crt configmap
	I0815 23:57:03.310997       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0815 23:57:03.311000       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0815 23:57:03.311211       1 shared_informer.go:262] Caches are synced for TTL
	I0815 23:57:03.312122       1 shared_informer.go:262] Caches are synced for PV protection
	I0815 23:57:03.361259       1 shared_informer.go:262] Caches are synced for daemon sets
	I0815 23:57:03.362413       1 shared_informer.go:262] Caches are synced for HPA
	I0815 23:57:03.410676       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0815 23:57:03.410676       1 shared_informer.go:262] Caches are synced for disruption
	I0815 23:57:03.410767       1 disruption.go:371] Sending events to api server.
	I0815 23:57:03.415927       1 shared_informer.go:262] Caches are synced for resource quota
	I0815 23:57:03.425080       1 shared_informer.go:262] Caches are synced for resource quota
	I0815 23:57:03.439154       1 shared_informer.go:262] Caches are synced for namespace
	I0815 23:57:03.450591       1 shared_informer.go:262] Caches are synced for service account
	I0815 23:57:03.461455       1 shared_informer.go:262] Caches are synced for deployment
	I0815 23:57:03.512511       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0815 23:57:03.933997       1 shared_informer.go:262] Caches are synced for garbage collector
	I0815 23:57:04.009535       1 shared_informer.go:262] Caches are synced for garbage collector
	I0815 23:57:04.009635       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0815 23:57:04.018431       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hg2gb"
	I0815 23:57:04.164442       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0815 23:57:04.314375       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-qq88g"
	I0815 23:57:04.319227       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2468f"
	
	
	==> kube-proxy [fdf8575c139e] <==
	I0815 23:57:04.537644       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0815 23:57:04.537670       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0815 23:57:04.537679       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0815 23:57:04.547370       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0815 23:57:04.547380       1 server_others.go:206] "Using iptables Proxier"
	I0815 23:57:04.547392       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0815 23:57:04.547475       1 server.go:661] "Version info" version="v1.24.1"
	I0815 23:57:04.547479       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:57:04.547707       1 config.go:317] "Starting service config controller"
	I0815 23:57:04.547714       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0815 23:57:04.547721       1 config.go:226] "Starting endpoint slice config controller"
	I0815 23:57:04.547723       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0815 23:57:04.547985       1 config.go:444] "Starting node config controller"
	I0815 23:57:04.547987       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0815 23:57:04.648663       1 shared_informer.go:262] Caches are synced for node config
	I0815 23:57:04.648733       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0815 23:57:04.648739       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [92c277b13674] <==
	W0815 23:56:47.159759       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 23:56:47.159763       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0815 23:56:47.159775       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 23:56:47.159778       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0815 23:56:47.159810       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 23:56:47.159814       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0815 23:56:47.159848       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 23:56:47.159855       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0815 23:56:47.159867       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 23:56:47.159870       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0815 23:56:47.159892       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 23:56:47.159899       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0815 23:56:47.159933       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 23:56:47.159941       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0815 23:56:47.159968       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 23:56:47.159971       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0815 23:56:47.160000       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 23:56:47.160007       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0815 23:56:48.073011       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 23:56:48.073110       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0815 23:56:48.104114       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 23:56:48.104128       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0815 23:56:48.197267       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 23:56:48.197284       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0815 23:56:48.353027       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Thu 2024-08-15 23:52:02 UTC, ends at Fri 2024-08-16 00:01:07 UTC. --
	Aug 15 23:56:51 running-upgrade-853000 kubelet[12575]: I0815 23:56:51.813845   12575 request.go:601] Waited for 1.141106338s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 15 23:56:51 running-upgrade-853000 kubelet[12575]: E0815 23:56:51.818435   12575 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-853000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-853000"
	Aug 15 23:57:03 running-upgrade-853000 kubelet[12575]: I0815 23:57:03.317576   12575 topology_manager.go:200] "Topology Admit Handler"
	Aug 15 23:57:03 running-upgrade-853000 kubelet[12575]: I0815 23:57:03.358592   12575 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 15 23:57:03 running-upgrade-853000 kubelet[12575]: I0815 23:57:03.358894   12575 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 15 23:57:03 running-upgrade-853000 kubelet[12575]: I0815 23:57:03.460207   12575 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1f393fdc-a3ec-41d0-9081-bf28c4ecee42-tmp\") pod \"storage-provisioner\" (UID: \"1f393fdc-a3ec-41d0-9081-bf28c4ecee42\") " pod="kube-system/storage-provisioner"
	Aug 15 23:57:03 running-upgrade-853000 kubelet[12575]: I0815 23:57:03.460244   12575 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2qsg\" (UniqueName: \"kubernetes.io/projected/1f393fdc-a3ec-41d0-9081-bf28c4ecee42-kube-api-access-b2qsg\") pod \"storage-provisioner\" (UID: \"1f393fdc-a3ec-41d0-9081-bf28c4ecee42\") " pod="kube-system/storage-provisioner"
	Aug 15 23:57:03 running-upgrade-853000 kubelet[12575]: E0815 23:57:03.564005   12575 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 15 23:57:03 running-upgrade-853000 kubelet[12575]: E0815 23:57:03.564026   12575 projected.go:192] Error preparing data for projected volume kube-api-access-b2qsg for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 15 23:57:03 running-upgrade-853000 kubelet[12575]: E0815 23:57:03.564063   12575 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/1f393fdc-a3ec-41d0-9081-bf28c4ecee42-kube-api-access-b2qsg podName:1f393fdc-a3ec-41d0-9081-bf28c4ecee42 nodeName:}" failed. No retries permitted until 2024-08-15 23:57:04.064048594 +0000 UTC m=+14.494392870 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b2qsg" (UniqueName: "kubernetes.io/projected/1f393fdc-a3ec-41d0-9081-bf28c4ecee42-kube-api-access-b2qsg") pod "storage-provisioner" (UID: "1f393fdc-a3ec-41d0-9081-bf28c4ecee42") : configmap "kube-root-ca.crt" not found
	Aug 15 23:57:04 running-upgrade-853000 kubelet[12575]: I0815 23:57:04.020029   12575 topology_manager.go:200] "Topology Admit Handler"
	Aug 15 23:57:04 running-upgrade-853000 kubelet[12575]: I0815 23:57:04.064569   12575 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-676vf\" (UniqueName: \"kubernetes.io/projected/62a71a6f-3858-420a-b78b-1e5943d58845-kube-api-access-676vf\") pod \"kube-proxy-hg2gb\" (UID: \"62a71a6f-3858-420a-b78b-1e5943d58845\") " pod="kube-system/kube-proxy-hg2gb"
	Aug 15 23:57:04 running-upgrade-853000 kubelet[12575]: I0815 23:57:04.064595   12575 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62a71a6f-3858-420a-b78b-1e5943d58845-xtables-lock\") pod \"kube-proxy-hg2gb\" (UID: \"62a71a6f-3858-420a-b78b-1e5943d58845\") " pod="kube-system/kube-proxy-hg2gb"
	Aug 15 23:57:04 running-upgrade-853000 kubelet[12575]: I0815 23:57:04.064607   12575 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62a71a6f-3858-420a-b78b-1e5943d58845-lib-modules\") pod \"kube-proxy-hg2gb\" (UID: \"62a71a6f-3858-420a-b78b-1e5943d58845\") " pod="kube-system/kube-proxy-hg2gb"
	Aug 15 23:57:04 running-upgrade-853000 kubelet[12575]: I0815 23:57:04.064644   12575 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/62a71a6f-3858-420a-b78b-1e5943d58845-kube-proxy\") pod \"kube-proxy-hg2gb\" (UID: \"62a71a6f-3858-420a-b78b-1e5943d58845\") " pod="kube-system/kube-proxy-hg2gb"
	Aug 15 23:57:04 running-upgrade-853000 kubelet[12575]: I0815 23:57:04.321416   12575 topology_manager.go:200] "Topology Admit Handler"
	Aug 15 23:57:04 running-upgrade-853000 kubelet[12575]: I0815 23:57:04.324275   12575 topology_manager.go:200] "Topology Admit Handler"
	Aug 15 23:57:04 running-upgrade-853000 kubelet[12575]: I0815 23:57:04.366665   12575 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78d39634-a50e-4247-9816-33479381ce12-config-volume\") pod \"coredns-6d4b75cb6d-2468f\" (UID: \"78d39634-a50e-4247-9816-33479381ce12\") " pod="kube-system/coredns-6d4b75cb6d-2468f"
	Aug 15 23:57:04 running-upgrade-853000 kubelet[12575]: I0815 23:57:04.366750   12575 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cfef06d-21fb-4939-9ecd-258825c6e7ec-config-volume\") pod \"coredns-6d4b75cb6d-qq88g\" (UID: \"9cfef06d-21fb-4939-9ecd-258825c6e7ec\") " pod="kube-system/coredns-6d4b75cb6d-qq88g"
	Aug 15 23:57:04 running-upgrade-853000 kubelet[12575]: I0815 23:57:04.366769   12575 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fs44\" (UniqueName: \"kubernetes.io/projected/9cfef06d-21fb-4939-9ecd-258825c6e7ec-kube-api-access-6fs44\") pod \"coredns-6d4b75cb6d-qq88g\" (UID: \"9cfef06d-21fb-4939-9ecd-258825c6e7ec\") " pod="kube-system/coredns-6d4b75cb6d-qq88g"
	Aug 15 23:57:04 running-upgrade-853000 kubelet[12575]: I0815 23:57:04.366783   12575 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tswgl\" (UniqueName: \"kubernetes.io/projected/78d39634-a50e-4247-9816-33479381ce12-kube-api-access-tswgl\") pod \"coredns-6d4b75cb6d-2468f\" (UID: \"78d39634-a50e-4247-9816-33479381ce12\") " pod="kube-system/coredns-6d4b75cb6d-2468f"
	Aug 15 23:57:04 running-upgrade-853000 kubelet[12575]: I0815 23:57:04.832138   12575 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b6652c728834a729ceb2fbf40ee726b9fe68be4c431ae4b4b6ea79f25047b86c"
	Aug 15 23:57:04 running-upgrade-853000 kubelet[12575]: I0815 23:57:04.833420   12575 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="9ca16b3bd8c23d98ce9d56f26bcb495e4532ce85bc071aa1789780f1dfe84b2d"
	Aug 16 00:00:42 running-upgrade-853000 kubelet[12575]: I0816 00:00:42.524411   12575 scope.go:110] "RemoveContainer" containerID="656a333c1c75f9786ee9921a526fd625c30d99084e1169e7edd5bb3a3cbc40d1"
	Aug 16 00:00:42 running-upgrade-853000 kubelet[12575]: I0816 00:00:42.544519   12575 scope.go:110] "RemoveContainer" containerID="8855e6664bde0d84b166613db94525577e99b8f16720450a781f3e50a7a0f33b"
	
	
	==> storage-provisioner [be4ef6142ab4] <==
	I0815 23:57:04.455259       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 23:57:04.459865       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 23:57:04.459878       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 23:57:04.463280       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 23:57:04.463747       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-853000_b04d8d8d-523d-4e1f-b156-6b3a69910864!
	I0815 23:57:04.463663       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eff91509-e516-4230-9c58-1c7baa4adb55", APIVersion:"v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-853000_b04d8d8d-523d-4e1f-b156-6b3a69910864 became leader
	I0815 23:57:04.564451       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-853000_b04d8d8d-523d-4e1f-b156-6b3a69910864!
	

-- /stdout --
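The `==> ... <==` sections above are minikube's standard diagnostic dump attached on failure; assuming the profile still existed at collection time, the same dump could be regenerated directly with the harness binary (a sketch, using the profile name from this run):

	out/minikube-darwin-arm64 logs -p running-upgrade-853000 --file=logs.txt
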
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-853000 -n running-upgrade-853000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-853000 -n running-upgrade-853000: exit status 2 (15.623238958s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-853000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-853000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-853000
--- FAIL: TestRunningBinaryUpgrade (589.70s)
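
The status probe above relies on minikube's Go-template output: `--format={{.APIServer}}` renders just that one field of the status struct, which is how the harness can assert on a single component; exit status 2 corresponds to a stopped component (hence the `Stopped` stdout and the "may be ok" note). A minimal standalone re-run of the same check, with the command copied from the log, would be:

	# Prints only the queried field; exits 2 when that component is stopped.
	out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-853000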

TestKubernetesUpgrade (19s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-559000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-559000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.93213125s)

-- stdout --
	* [kubernetes-upgrade-559000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-559000" primary control-plane node in "kubernetes-upgrade-559000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-559000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
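Both `Creating qemu2 VM` attempts above die on `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not listening on the CI host; the stderr trace below confirms the same error at each `libmachine` start. A rough sanity check one might run on the host (paths copied from the qemu command line in the trace; wrapping a no-op command is an assumption, mirroring how the driver wraps qemu-system-aarch64) looks like:

	# Is anything at the socket path the qemu2 driver is configured to use?
	ls -l /var/run/socket_vmnet
	# Probe the daemon by wrapping a trivial command; "Connection refused"
	# here reproduces the failure without booting a VM.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
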
** stderr ** 
	I0815 16:54:33.329272    4071 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:54:33.329430    4071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:54:33.329433    4071 out.go:358] Setting ErrFile to fd 2...
	I0815 16:54:33.329436    4071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:54:33.329572    4071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:54:33.330648    4071 out.go:352] Setting JSON to false
	I0815 16:54:33.347054    4071 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3241,"bootTime":1723762832,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:54:33.347119    4071 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:54:33.353373    4071 out.go:177] * [kubernetes-upgrade-559000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:54:33.360515    4071 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:54:33.360544    4071 notify.go:220] Checking for updates...
	I0815 16:54:33.367455    4071 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:54:33.370511    4071 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:54:33.373493    4071 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:54:33.376448    4071 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:54:33.379474    4071 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:54:33.382793    4071 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:54:33.382852    4071 config.go:182] Loaded profile config "running-upgrade-853000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 16:54:33.382896    4071 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:54:33.391482    4071 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 16:54:33.394465    4071 start.go:297] selected driver: qemu2
	I0815 16:54:33.394471    4071 start.go:901] validating driver "qemu2" against <nil>
	I0815 16:54:33.394477    4071 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:54:33.396598    4071 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 16:54:33.399535    4071 out.go:177] * Automatically selected the socket_vmnet network
	I0815 16:54:33.402560    4071 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 16:54:33.402587    4071 cni.go:84] Creating CNI manager for ""
	I0815 16:54:33.402594    4071 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0815 16:54:33.402617    4071 start.go:340] cluster config:
	{Name:kubernetes-upgrade-559000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-559000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:54:33.406172    4071 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:54:33.413479    4071 out.go:177] * Starting "kubernetes-upgrade-559000" primary control-plane node in "kubernetes-upgrade-559000" cluster
	I0815 16:54:33.417522    4071 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 16:54:33.417539    4071 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0815 16:54:33.417548    4071 cache.go:56] Caching tarball of preloaded images
	I0815 16:54:33.417611    4071 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:54:33.417616    4071 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0815 16:54:33.417671    4071 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/kubernetes-upgrade-559000/config.json ...
	I0815 16:54:33.417682    4071 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/kubernetes-upgrade-559000/config.json: {Name:mk6f8ce88cdfe1e270db800033f71a18cbc6d160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:54:33.418023    4071 start.go:360] acquireMachinesLock for kubernetes-upgrade-559000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:54:33.418063    4071 start.go:364] duration metric: took 31.459µs to acquireMachinesLock for "kubernetes-upgrade-559000"
	I0815 16:54:33.418079    4071 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-559000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-559000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:54:33.418110    4071 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 16:54:33.426578    4071 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 16:54:33.441892    4071 start.go:159] libmachine.API.Create for "kubernetes-upgrade-559000" (driver="qemu2")
	I0815 16:54:33.441922    4071 client.go:168] LocalClient.Create starting
	I0815 16:54:33.441985    4071 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 16:54:33.442013    4071 main.go:141] libmachine: Decoding PEM data...
	I0815 16:54:33.442023    4071 main.go:141] libmachine: Parsing certificate...
	I0815 16:54:33.442059    4071 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 16:54:33.442081    4071 main.go:141] libmachine: Decoding PEM data...
	I0815 16:54:33.442092    4071 main.go:141] libmachine: Parsing certificate...
	I0815 16:54:33.442575    4071 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 16:54:33.594545    4071 main.go:141] libmachine: Creating SSH key...
	I0815 16:54:33.829072    4071 main.go:141] libmachine: Creating Disk image...
	I0815 16:54:33.829085    4071 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 16:54:33.829349    4071 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/disk.qcow2
	I0815 16:54:33.839505    4071 main.go:141] libmachine: STDOUT: 
	I0815 16:54:33.839540    4071 main.go:141] libmachine: STDERR: 
	I0815 16:54:33.839621    4071 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/disk.qcow2 +20000M
	I0815 16:54:33.848559    4071 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 16:54:33.848576    4071 main.go:141] libmachine: STDERR: 
	I0815 16:54:33.848594    4071 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/disk.qcow2
	I0815 16:54:33.848597    4071 main.go:141] libmachine: Starting QEMU VM...
	I0815 16:54:33.848609    4071 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:54:33.848639    4071 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:1a:c4:17:5a:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/disk.qcow2
	I0815 16:54:33.850272    4071 main.go:141] libmachine: STDOUT: 
	I0815 16:54:33.850291    4071 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:54:33.850317    4071 client.go:171] duration metric: took 408.38625ms to LocalClient.Create
	I0815 16:54:35.852453    4071 start.go:128] duration metric: took 2.434303916s to createHost
	I0815 16:54:35.852498    4071 start.go:83] releasing machines lock for "kubernetes-upgrade-559000", held for 2.434402791s
	W0815 16:54:35.852537    4071 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:54:35.861418    4071 out.go:177] * Deleting "kubernetes-upgrade-559000" in qemu2 ...
	W0815 16:54:35.874725    4071 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:54:35.874743    4071 start.go:729] Will try again in 5 seconds ...
	I0815 16:54:40.876889    4071 start.go:360] acquireMachinesLock for kubernetes-upgrade-559000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:54:40.877120    4071 start.go:364] duration metric: took 194.625µs to acquireMachinesLock for "kubernetes-upgrade-559000"
	I0815 16:54:40.877168    4071 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-559000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-559000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:54:40.877286    4071 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 16:54:40.885107    4071 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 16:54:40.910402    4071 start.go:159] libmachine.API.Create for "kubernetes-upgrade-559000" (driver="qemu2")
	I0815 16:54:40.910436    4071 client.go:168] LocalClient.Create starting
	I0815 16:54:40.910507    4071 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 16:54:40.910554    4071 main.go:141] libmachine: Decoding PEM data...
	I0815 16:54:40.910571    4071 main.go:141] libmachine: Parsing certificate...
	I0815 16:54:40.910612    4071 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 16:54:40.910642    4071 main.go:141] libmachine: Decoding PEM data...
	I0815 16:54:40.910652    4071 main.go:141] libmachine: Parsing certificate...
	I0815 16:54:40.911037    4071 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 16:54:41.064960    4071 main.go:141] libmachine: Creating SSH key...
	I0815 16:54:41.170100    4071 main.go:141] libmachine: Creating Disk image...
	I0815 16:54:41.170111    4071 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 16:54:41.170326    4071 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/disk.qcow2
	I0815 16:54:41.179806    4071 main.go:141] libmachine: STDOUT: 
	I0815 16:54:41.179825    4071 main.go:141] libmachine: STDERR: 
	I0815 16:54:41.179877    4071 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/disk.qcow2 +20000M
	I0815 16:54:41.187891    4071 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 16:54:41.187906    4071 main.go:141] libmachine: STDERR: 
	I0815 16:54:41.187917    4071 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/disk.qcow2
	I0815 16:54:41.187922    4071 main.go:141] libmachine: Starting QEMU VM...
	I0815 16:54:41.187934    4071 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:54:41.187963    4071 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:ea:31:98:79:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/disk.qcow2
	I0815 16:54:41.189639    4071 main.go:141] libmachine: STDOUT: 
	I0815 16:54:41.189671    4071 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:54:41.189683    4071 client.go:171] duration metric: took 279.240375ms to LocalClient.Create
	I0815 16:54:43.191923    4071 start.go:128] duration metric: took 2.314578708s to createHost
	I0815 16:54:43.192044    4071 start.go:83] releasing machines lock for "kubernetes-upgrade-559000", held for 2.314873375s
	W0815 16:54:43.192513    4071 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-559000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-559000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:54:43.205142    4071 out.go:201] 
	W0815 16:54:43.208224    4071 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:54:43.208317    4071 out.go:270] * 
	* 
	W0815 16:54:43.210967    4071 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:54:43.219120    4071 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-559000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-559000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-559000: (3.706400333s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-559000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-559000 status --format={{.Host}}: exit status 7 (49.7255ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-559000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-559000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.17033125s)

-- stdout --
	* [kubernetes-upgrade-559000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-559000" primary control-plane node in "kubernetes-upgrade-559000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-559000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-559000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 16:54:47.019687    4109 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:54:47.019819    4109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:54:47.019822    4109 out.go:358] Setting ErrFile to fd 2...
	I0815 16:54:47.019825    4109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:54:47.019949    4109 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:54:47.020990    4109 out.go:352] Setting JSON to false
	I0815 16:54:47.037777    4109 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3255,"bootTime":1723762832,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:54:47.037871    4109 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:54:47.041970    4109 out.go:177] * [kubernetes-upgrade-559000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:54:47.049831    4109 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:54:47.049871    4109 notify.go:220] Checking for updates...
	I0815 16:54:47.057902    4109 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:54:47.060889    4109 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:54:47.063857    4109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:54:47.066930    4109 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:54:47.069856    4109 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:54:47.073185    4109 config.go:182] Loaded profile config "kubernetes-upgrade-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0815 16:54:47.073431    4109 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:54:47.077863    4109 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 16:54:47.084897    4109 start.go:297] selected driver: qemu2
	I0815 16:54:47.084905    4109 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-559000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-559000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:54:47.084980    4109 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:54:47.087417    4109 cni.go:84] Creating CNI manager for ""
	I0815 16:54:47.087442    4109 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:54:47.087479    4109 start.go:340] cluster config:
	{Name:kubernetes-upgrade-559000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-559000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:54:47.091170    4109 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:54:47.096859    4109 out.go:177] * Starting "kubernetes-upgrade-559000" primary control-plane node in "kubernetes-upgrade-559000" cluster
	I0815 16:54:47.100874    4109 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:54:47.100891    4109 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 16:54:47.100902    4109 cache.go:56] Caching tarball of preloaded images
	I0815 16:54:47.100959    4109 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:54:47.100965    4109 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:54:47.101019    4109 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/kubernetes-upgrade-559000/config.json ...
	I0815 16:54:47.101457    4109 start.go:360] acquireMachinesLock for kubernetes-upgrade-559000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:54:47.101488    4109 start.go:364] duration metric: took 24.917µs to acquireMachinesLock for "kubernetes-upgrade-559000"
	I0815 16:54:47.101497    4109 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:54:47.101504    4109 fix.go:54] fixHost starting: 
	I0815 16:54:47.101615    4109 fix.go:112] recreateIfNeeded on kubernetes-upgrade-559000: state=Stopped err=<nil>
	W0815 16:54:47.101623    4109 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:54:47.109853    4109 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-559000" ...
	I0815 16:54:47.113682    4109 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:54:47.113722    4109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:ea:31:98:79:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/disk.qcow2
	I0815 16:54:47.115820    4109 main.go:141] libmachine: STDOUT: 
	I0815 16:54:47.115840    4109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:54:47.115869    4109 fix.go:56] duration metric: took 14.366042ms for fixHost
	I0815 16:54:47.115875    4109 start.go:83] releasing machines lock for "kubernetes-upgrade-559000", held for 14.382625ms
	W0815 16:54:47.115881    4109 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:54:47.115915    4109 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:54:47.115920    4109 start.go:729] Will try again in 5 seconds ...
	I0815 16:54:52.118029    4109 start.go:360] acquireMachinesLock for kubernetes-upgrade-559000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:54:52.118134    4109 start.go:364] duration metric: took 86.041µs to acquireMachinesLock for "kubernetes-upgrade-559000"
	I0815 16:54:52.118151    4109 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:54:52.118155    4109 fix.go:54] fixHost starting: 
	I0815 16:54:52.118307    4109 fix.go:112] recreateIfNeeded on kubernetes-upgrade-559000: state=Stopped err=<nil>
	W0815 16:54:52.118315    4109 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:54:52.125536    4109 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-559000" ...
	I0815 16:54:52.129487    4109 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:54:52.129533    4109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:ea:31:98:79:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubernetes-upgrade-559000/disk.qcow2
	I0815 16:54:52.131934    4109 main.go:141] libmachine: STDOUT: 
	I0815 16:54:52.131953    4109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 16:54:52.131973    4109 fix.go:56] duration metric: took 13.817208ms for fixHost
	I0815 16:54:52.131979    4109 start.go:83] releasing machines lock for "kubernetes-upgrade-559000", held for 13.840125ms
	W0815 16:54:52.132023    4109 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-559000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-559000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 16:54:52.139480    4109 out.go:201] 
	W0815 16:54:52.142512    4109 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 16:54:52.142517    4109 out.go:270] * 
	* 
	W0815 16:54:52.142994    4109 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:54:52.153409    4109 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-559000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-559000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-559000 version --output=json: exit status 1 (28.83175ms)

** stderr ** 
	error: context "kubernetes-upgrade-559000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-15 16:54:52.19093 -0700 PDT m=+2976.119261584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-559000 -n kubernetes-upgrade-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-559000 -n kubernetes-upgrade-559000: exit status 7 (29.866375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-559000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-559000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-559000
--- FAIL: TestKubernetesUpgrade (19.00s)
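Note on the failure mode: every qemu2 start in this test fails at the same step, when socket_vmnet_client cannot reach the daemon socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"); the VM never gets a network interface and minikube exits with GUEST_PROVISION (exit status 80). A minimal shell sketch for checking the daemon before re-running; the paths come from the log above, while the launchd label and gateway address are assumptions:

    # confirm the socket_vmnet daemon is running and its socket file exists
    ls -l /var/run/socket_vmnet                   # should exist and be a socket
    sudo launchctl list | grep -i socket_vmnet    # service label assumed; verify locally
    # if the socket is missing, start the daemon by hand (gateway address is an example)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet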

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.35s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19452
- KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3656445323/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.35s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.06s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19452
- KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2242575144/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.06s)
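Both hyperkit subtests fail for the same reason: hyperkit is an Intel-only hypervisor, so on this darwin/arm64 agent minikube rejects the driver outright (DRV_UNSUPPORTED_OS, exit status 56) before any upgrade logic runs. A minimal shell sketch of an architecture guard such a job could run first; the skip convention shown is an assumption, not the suite's actual helper:

    # hyperkit supports only darwin/amd64; skip on Apple silicon hosts
    if [ "$(uname -m)" != "x86_64" ]; then
      echo "SKIP: hyperkit driver requires darwin/amd64 (host is $(uname -m))"
      exit 0
    fi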

TestStoppedBinaryUpgrade/Upgrade (582.61s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.332076740 start -p stopped-upgrade-889000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.332076740 start -p stopped-upgrade-889000 --memory=2200 --vm-driver=qemu2 : (47.858975334s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.332076740 -p stopped-upgrade-889000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.332076740 -p stopped-upgrade-889000 stop: (12.125992625s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-889000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0815 16:55:53.645379    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:58:57.026720    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-889000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.507996792s)

-- stdout --
	* [stopped-upgrade-889000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-889000" primary control-plane node in "stopped-upgrade-889000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-889000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0815 16:55:53.335899    4145 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:55:53.336057    4145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:55:53.336061    4145 out.go:358] Setting ErrFile to fd 2...
	I0815 16:55:53.336065    4145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:55:53.336236    4145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:55:53.337583    4145 out.go:352] Setting JSON to false
	I0815 16:55:53.357447    4145 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3321,"bootTime":1723762832,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:55:53.357527    4145 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:55:53.362571    4145 out.go:177] * [stopped-upgrade-889000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:55:53.369572    4145 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:55:53.369719    4145 notify.go:220] Checking for updates...
	I0815 16:55:53.376484    4145 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:55:53.379573    4145 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:55:53.382545    4145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:55:53.385509    4145 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:55:53.388531    4145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:55:53.390141    4145 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 16:55:53.393518    4145 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 16:55:53.396523    4145 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:55:53.400345    4145 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 16:55:53.407543    4145 start.go:297] selected driver: qemu2
	I0815 16:55:53.407548    4145 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50482 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 16:55:53.407592    4145 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:55:53.410270    4145 cni.go:84] Creating CNI manager for ""
	I0815 16:55:53.410288    4145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:55:53.410311    4145 start.go:340] cluster config:
	{Name:stopped-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50482 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 16:55:53.410361    4145 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:55:53.416474    4145 out.go:177] * Starting "stopped-upgrade-889000" primary control-plane node in "stopped-upgrade-889000" cluster
	I0815 16:55:53.420552    4145 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0815 16:55:53.420568    4145 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0815 16:55:53.420577    4145 cache.go:56] Caching tarball of preloaded images
	I0815 16:55:53.420634    4145 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 16:55:53.420640    4145 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0815 16:55:53.420691    4145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/config.json ...
	I0815 16:55:53.421031    4145 start.go:360] acquireMachinesLock for stopped-upgrade-889000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:55:53.421064    4145 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "stopped-upgrade-889000"
	I0815 16:55:53.421073    4145 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:55:53.421078    4145 fix.go:54] fixHost starting: 
	I0815 16:55:53.421187    4145 fix.go:112] recreateIfNeeded on stopped-upgrade-889000: state=Stopped err=<nil>
	W0815 16:55:53.421196    4145 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:55:53.429436    4145 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-889000" ...
	I0815 16:55:53.435482    4145 qemu.go:418] Using hvf for hardware acceleration
	I0815 16:55:53.435554    4145 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50447-:22,hostfwd=tcp::50448-:2376,hostname=stopped-upgrade-889000 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/disk.qcow2
	I0815 16:55:53.480560    4145 main.go:141] libmachine: STDOUT: 
	I0815 16:55:53.480593    4145 main.go:141] libmachine: STDERR: 
	I0815 16:55:53.480599    4145 main.go:141] libmachine: Waiting for VM to start (ssh -p 50447 docker@127.0.0.1)...
	I0815 16:56:13.861566    4145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/config.json ...
	I0815 16:56:13.862252    4145 machine.go:93] provisionDockerMachine start ...
	I0815 16:56:13.862444    4145 main.go:141] libmachine: Using SSH client type: native
	I0815 16:56:13.862928    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051305a0] 0x105132e00 <nil>  [] 0s} localhost 50447 <nil> <nil>}
	I0815 16:56:13.862942    4145 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:56:13.954681    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:56:13.954720    4145 buildroot.go:166] provisioning hostname "stopped-upgrade-889000"
	I0815 16:56:13.954819    4145 main.go:141] libmachine: Using SSH client type: native
	I0815 16:56:13.955078    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051305a0] 0x105132e00 <nil>  [] 0s} localhost 50447 <nil> <nil>}
	I0815 16:56:13.955094    4145 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-889000 && echo "stopped-upgrade-889000" | sudo tee /etc/hostname
	I0815 16:56:14.039805    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-889000
	
	I0815 16:56:14.039874    4145 main.go:141] libmachine: Using SSH client type: native
	I0815 16:56:14.040051    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051305a0] 0x105132e00 <nil>  [] 0s} localhost 50447 <nil> <nil>}
	I0815 16:56:14.040064    4145 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-889000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-889000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-889000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:56:14.116883    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:56:14.116897    4145 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-964/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-964/.minikube}
	I0815 16:56:14.116913    4145 buildroot.go:174] setting up certificates
	I0815 16:56:14.116921    4145 provision.go:84] configureAuth start
	I0815 16:56:14.116926    4145 provision.go:143] copyHostCerts
	I0815 16:56:14.117009    4145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-964/.minikube/ca.pem, removing ...
	I0815 16:56:14.117016    4145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-964/.minikube/ca.pem
	I0815 16:56:14.117132    4145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-964/.minikube/ca.pem (1082 bytes)
	I0815 16:56:14.117341    4145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-964/.minikube/cert.pem, removing ...
	I0815 16:56:14.117346    4145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-964/.minikube/cert.pem
	I0815 16:56:14.117410    4145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-964/.minikube/cert.pem (1123 bytes)
	I0815 16:56:14.117529    4145 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-964/.minikube/key.pem, removing ...
	I0815 16:56:14.117534    4145 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-964/.minikube/key.pem
	I0815 16:56:14.117583    4145 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-964/.minikube/key.pem (1679 bytes)
	I0815 16:56:14.117695    4145 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-964/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-889000 san=[127.0.0.1 localhost minikube stopped-upgrade-889000]
	I0815 16:56:14.330948    4145 provision.go:177] copyRemoteCerts
	I0815 16:56:14.331001    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:56:14.331013    4145 sshutil.go:53] new ssh client: &{IP:localhost Port:50447 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/id_rsa Username:docker}
	I0815 16:56:14.368303    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:56:14.375569    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0815 16:56:14.382766    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 16:56:14.389518    4145 provision.go:87] duration metric: took 272.585166ms to configureAuth
	I0815 16:56:14.389529    4145 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:56:14.389646    4145 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 16:56:14.389697    4145 main.go:141] libmachine: Using SSH client type: native
	I0815 16:56:14.389798    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051305a0] 0x105132e00 <nil>  [] 0s} localhost 50447 <nil> <nil>}
	I0815 16:56:14.389802    4145 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:56:14.462307    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:56:14.462317    4145 buildroot.go:70] root file system type: tmpfs
	I0815 16:56:14.462373    4145 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:56:14.462424    4145 main.go:141] libmachine: Using SSH client type: native
	I0815 16:56:14.462555    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051305a0] 0x105132e00 <nil>  [] 0s} localhost 50447 <nil> <nil>}
	I0815 16:56:14.462591    4145 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:56:14.534503    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:56:14.534569    4145 main.go:141] libmachine: Using SSH client type: native
	I0815 16:56:14.534701    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051305a0] 0x105132e00 <nil>  [] 0s} localhost 50447 <nil> <nil>}
	I0815 16:56:14.534709    4145 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:56:14.926222    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:56:14.926234    4145 machine.go:96] duration metric: took 1.063960458s to provisionDockerMachine
	I0815 16:56:14.926241    4145 start.go:293] postStartSetup for "stopped-upgrade-889000" (driver="qemu2")
	I0815 16:56:14.926248    4145 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:56:14.926315    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:56:14.926325    4145 sshutil.go:53] new ssh client: &{IP:localhost Port:50447 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/id_rsa Username:docker}
	I0815 16:56:14.963662    4145 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:56:14.965010    4145 info.go:137] Remote host: Buildroot 2021.02.12
	I0815 16:56:14.965018    4145 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-964/.minikube/addons for local assets ...
	I0815 16:56:14.965095    4145 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-964/.minikube/files for local assets ...
	I0815 16:56:14.965189    4145 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem -> 14462.pem in /etc/ssl/certs
	I0815 16:56:14.965290    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:56:14.968020    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem --> /etc/ssl/certs/14462.pem (1708 bytes)
	I0815 16:56:14.975193    4145 start.go:296] duration metric: took 48.946125ms for postStartSetup
	I0815 16:56:14.975208    4145 fix.go:56] duration metric: took 21.553893708s for fixHost
	I0815 16:56:14.975253    4145 main.go:141] libmachine: Using SSH client type: native
	I0815 16:56:14.975362    4145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051305a0] 0x105132e00 <nil>  [] 0s} localhost 50447 <nil> <nil>}
	I0815 16:56:14.975367    4145 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:56:15.045079    4145 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723766174.752727629
	
	I0815 16:56:15.045087    4145 fix.go:216] guest clock: 1723766174.752727629
	I0815 16:56:15.045091    4145 fix.go:229] Guest: 2024-08-15 16:56:14.752727629 -0700 PDT Remote: 2024-08-15 16:56:14.975209 -0700 PDT m=+21.671272293 (delta=-222.481371ms)
	I0815 16:56:15.045103    4145 fix.go:200] guest clock delta is within tolerance: -222.481371ms
	I0815 16:56:15.045106    4145 start.go:83] releasing machines lock for "stopped-upgrade-889000", held for 21.623799583s
	I0815 16:56:15.045165    4145 ssh_runner.go:195] Run: cat /version.json
	I0815 16:56:15.045180    4145 sshutil.go:53] new ssh client: &{IP:localhost Port:50447 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/id_rsa Username:docker}
	I0815 16:56:15.045165    4145 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:56:15.045206    4145 sshutil.go:53] new ssh client: &{IP:localhost Port:50447 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/id_rsa Username:docker}
	W0815 16:56:15.045840    4145 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50447: connect: connection refused
	I0815 16:56:15.045865    4145 retry.go:31] will retry after 334.022404ms: dial tcp [::1]:50447: connect: connection refused
	W0815 16:56:15.422762    4145 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0815 16:56:15.422835    4145 ssh_runner.go:195] Run: systemctl --version
	I0815 16:56:15.425508    4145 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 16:56:15.427631    4145 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:56:15.427690    4145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0815 16:56:15.431483    4145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0815 16:56:15.438057    4145 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
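
The two find/sed pipelines above normalize every bridge and podman CNI config under /etc/cni/net.d onto the pod CIDR 10.244.0.0/16, dropping IPv6 entries along the way. The same rewrite, sketched in Go for the simple IPv4 "subnet" case only; the regex mirrors the sed expressions but is a simplification.

    // cnisubnet.go - sketch of forcing a CNI config's subnet onto the pod
    // CIDR, mirroring the sed expressions above (IPv4 case only).
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func rewriteSubnet(path, cidr string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf(`"subnet": %q`, cidr)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := rewriteSubnet("/etc/cni/net.d/87-podman-bridge.conflist", "10.244.0.0/16"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
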
	I0815 16:56:15.438081    4145 start.go:495] detecting cgroup driver to use...
	I0815 16:56:15.438169    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:56:15.445902    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0815 16:56:15.449409    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:56:15.452467    4145 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:56:15.452522    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:56:15.455940    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:56:15.458906    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:56:15.462125    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:56:15.464929    4145 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:56:15.467851    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:56:15.470901    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:56:15.473971    4145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:56:15.476804    4145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:56:15.479998    4145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:56:15.483159    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:56:15.540876    4145 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 16:56:15.549735    4145 start.go:495] detecting cgroup driver to use...
	I0815 16:56:15.549804    4145 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:56:15.556936    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:56:15.562070    4145 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:56:15.571080    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:56:15.575223    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:56:15.579608    4145 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:56:15.619353    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:56:15.624369    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:56:15.629595    4145 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:56:15.630847    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:56:15.633473    4145 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0815 16:56:15.638157    4145 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:56:15.696517    4145 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:56:15.757093    4145 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:56:15.757168    4145 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:56:15.762289    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:56:15.824648    4145 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:56:16.970888    4145 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.146205875s)
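
The 130-byte daemon.json written above pins Docker to the cgroupfs driver so it agrees with the kubelet's cgroupDriver setting, and Docker is then restarted to pick it up. A sketch of writing such a config; only the cgroup-driver setting is confirmed by the log, and any other keys in the real file are unknown here.

    // daemonjson.go - sketch of pinning Docker to the cgroupfs cgroup driver
    // via /etc/docker/daemon.json. The file minikube actually writes may
    // carry additional keys not visible in this log.
    package main

    import (
        "encoding/json"
        "os"
    )

    func main() {
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/docker/daemon.json", data, 0o644); err != nil {
            panic(err)
        }
        // Follow with: systemctl daemon-reload && systemctl restart docker,
        // as the subsequent log lines do.
    }
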
	I0815 16:56:16.970946    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:56:16.976029    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:56:16.980956    4145 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:56:17.040810    4145 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:56:17.118584    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:56:17.176289    4145 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:56:17.182898    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:56:17.187462    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:56:17.251966    4145 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:56:17.289249    4145 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:56:17.289341    4145 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:56:17.292860    4145 start.go:563] Will wait 60s for crictl version
	I0815 16:56:17.292923    4145 ssh_runner.go:195] Run: which crictl
	I0815 16:56:17.294414    4145 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:56:17.308766    4145 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0815 16:56:17.308846    4145 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:56:17.325352    4145 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:56:17.346532    4145 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0815 16:56:17.346597    4145 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0815 16:56:17.347896    4145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
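
The bash one-liner above is an idempotent /etc/hosts update: strip any existing host.minikube.internal line, append the fresh mapping, and copy the result back with sudo. The same idea in Go as a sketch; the real code executes the shell form remotely via ssh_runner.

    // hostsentry.go - sketch of the idempotent hosts-file update above:
    // remove any stale line for the name, then append "ip<TAB>name".
    package main

    import (
        "os"
        "strings"
    )

    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "10.0.2.2", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
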
	I0815 16:56:17.351677    4145 kubeadm.go:883] updating cluster {Name:stopped-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50482 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0815 16:56:17.351720    4145 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0815 16:56:17.351764    4145 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:56:17.364300    4145 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0815 16:56:17.364311    4145 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0815 16:56:17.364360    4145 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0815 16:56:17.367991    4145 ssh_runner.go:195] Run: which lz4
	I0815 16:56:17.369423    4145 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 16:56:17.370650    4145 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 16:56:17.370659    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0815 16:56:18.304203    4145 docker.go:649] duration metric: took 934.799333ms to copy over tarball
	I0815 16:56:18.304258    4145 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 16:56:19.482834    4145 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.178548166s)
	I0815 16:56:19.482848    4145 ssh_runner.go:146] rm: /preloaded.tar.lz4
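
Because /preloaded.tar.lz4 fails the stat existence check, the ~360 MB preload tarball is copied into the VM and unpacked into /var, seeding Docker's image store before any pull, then removed. The check-then-extract flow sketched locally in Go; the plain file copy stands in for the scp step and is an assumption.

    // preload.go - sketch of the preload flow above: skip the transfer when
    // the tarball already exists, otherwise copy it in and unpack into /var.
    package main

    import (
        "io"
        "os"
        "os/exec"
    )

    func ensurePreload(cached, dest string) error {
        if _, err := os.Stat(dest); err == nil {
            return nil // already on disk; nothing to transfer
        }
        src, err := os.Open(cached)
        if err != nil {
            return err
        }
        defer src.Close()
        dst, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer dst.Close()
        if _, err := io.Copy(dst, src); err != nil {
            return err
        }
        // Matches the tar invocation in the log above.
        return exec.Command("sudo", "tar", "--xattrs",
            "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", dest).Run()
    }

    func main() {
        _ = ensurePreload("preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4", "/preloaded.tar.lz4")
    }
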
	I0815 16:56:19.498048    4145 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0815 16:56:19.500856    4145 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0815 16:56:19.505627    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:56:19.582113    4145 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:56:21.238752    4145 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.65659925s)
	I0815 16:56:21.238847    4145 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:56:21.249551    4145 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0815 16:56:21.249560    4145 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0815 16:56:21.249565    4145 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 16:56:21.254771    4145 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:56:21.257185    4145 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:56:21.259200    4145 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:56:21.259797    4145 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0815 16:56:21.261856    4145 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:56:21.261943    4145 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0815 16:56:21.263140    4145 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0815 16:56:21.263277    4145 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:56:21.264351    4145 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:56:21.264476    4145 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0815 16:56:21.265131    4145 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:56:21.265420    4145 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:56:21.266579    4145 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 16:56:21.267025    4145 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:56:21.267828    4145 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:56:21.268506    4145 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
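
Each "daemon lookup ... No such image" line above is the expected first miss in a two-step lookup: the image package checks the local Docker daemon before falling back to the registry. A hedged sketch using go-containerregistry, which minikube's image handling builds on; the option-free calls are a simplification of the real invocations.

    // imagelookup.go - sketch of the daemon-then-remote image lookup whose
    // daemon misses are logged above.
    package main

    import (
        "fmt"

        "github.com/google/go-containerregistry/pkg/name"
        v1 "github.com/google/go-containerregistry/pkg/v1"
        "github.com/google/go-containerregistry/pkg/v1/daemon"
        "github.com/google/go-containerregistry/pkg/v1/remote"
    )

    func lookupImage(image string) (v1.Image, error) {
        ref, err := name.ParseReference(image)
        if err != nil {
            return nil, err
        }
        if img, err := daemon.Image(ref); err == nil {
            return img, nil // present in the local Docker daemon
        }
        return remote.Image(ref) // fall back to the registry
    }

    func main() {
        if _, err := lookupImage("registry.k8s.io/pause:3.7"); err != nil {
            fmt.Println("lookup failed:", err)
        }
    }
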
	I0815 16:56:21.615274    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:56:21.627299    4145 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0815 16:56:21.627335    4145 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:56:21.627384    4145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 16:56:21.630802    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0815 16:56:21.634588    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0815 16:56:21.645047    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0815 16:56:21.645078    4145 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0815 16:56:21.645095    4145 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0815 16:56:21.645138    4145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0815 16:56:21.654063    4145 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0815 16:56:21.654083    4145 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0815 16:56:21.654134    4145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0815 16:56:21.665240    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0815 16:56:21.665358    4145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	W0815 16:56:21.668962    4145 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0815 16:56:21.669076    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:56:21.670904    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0815 16:56:21.670926    4145 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0815 16:56:21.670938    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0815 16:56:21.677862    4145 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0815 16:56:21.677877    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0815 16:56:21.688702    4145 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0815 16:56:21.688731    4145 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:56:21.688786    4145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0815 16:56:21.703719    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:56:21.716625    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0815 16:56:21.716747    4145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0815 16:56:21.716790    4145 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
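
The "needs transfer" decisions above come from comparing the image ID reported by the runtime against the expected hash; on mismatch the stale image is removed, the cached tarball is copied over, and docker load reads it from stdin (hence the `sudo cat ... | docker load` form). A local sketch of that cycle; the sha256 prefix handling is illustrative.

    // imagesync.go - sketch of the check-and-reload cycle above: inspect the
    // image ID in the runtime, and when it differs from the expected hash,
    // remove the image and load the cached copy via `docker load`.
    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func syncImage(ref, wantID, cachedPath string) error {
        out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
        got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
        if got == wantID {
            return nil // already the expected image
        }
        _ = exec.Command("docker", "rmi", ref).Run() // drop the mismatched copy, if any
        f, err := os.Open(cachedPath)
        if err != nil {
            return err
        }
        defer f.Close()
        load := exec.Command("docker", "load")
        load.Stdin = f // equivalent to `cat cachedPath | docker load`
        return load.Run()
    }

    func main() {
        _ = syncImage("registry.k8s.io/pause:3.7",
            "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
            "/var/lib/minikube/images/pause_3.7")
    }
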
	I0815 16:56:21.718247    4145 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0815 16:56:21.718263    4145 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:56:21.718302    4145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0815 16:56:21.719210    4145 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0815 16:56:21.719228    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0815 16:56:21.722169    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:56:21.740466    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0815 16:56:21.760487    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0815 16:56:21.763717    4145 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0815 16:56:21.763737    4145 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:56:21.763785    4145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0815 16:56:21.780050    4145 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0815 16:56:21.780066    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0815 16:56:21.789469    4145 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0815 16:56:21.789491    4145 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 16:56:21.789542    4145 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0815 16:56:21.804303    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0815 16:56:21.830427    4145 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0815 16:56:21.830456    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0815 16:56:22.175121    4145 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0815 16:56:22.175629    4145 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:56:22.213238    4145 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0815 16:56:22.213282    4145 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:56:22.213380    4145 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 16:56:22.239280    4145 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 16:56:22.239451    4145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0815 16:56:22.241758    4145 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0815 16:56:22.241778    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0815 16:56:22.275466    4145 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 16:56:22.275480    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0815 16:56:22.515040    4145 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 16:56:22.515081    4145 cache_images.go:92] duration metric: took 1.265494667s to LoadCachedImages
	W0815 16:56:22.515116    4145 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0815 16:56:22.515122    4145 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0815 16:56:22.515170    4145 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-889000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:56:22.515238    4145 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0815 16:56:22.529309    4145 cni.go:84] Creating CNI manager for ""
	I0815 16:56:22.529321    4145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:56:22.529326    4145 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 16:56:22.529335    4145 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-889000 NodeName:stopped-upgrade-889000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 16:56:22.529410    4145 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-889000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 16:56:22.529466    4145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0815 16:56:22.532401    4145 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:56:22.532430    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 16:56:22.535469    4145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0815 16:56:22.540483    4145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:56:22.545527    4145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0815 16:56:22.550561    4145 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0815 16:56:22.551849    4145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:56:22.555767    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:56:22.621917    4145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:56:22.628315    4145 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000 for IP: 10.0.2.15
	I0815 16:56:22.628323    4145 certs.go:194] generating shared ca certs ...
	I0815 16:56:22.628335    4145 certs.go:226] acquiring lock for ca certs: {Name:mk1fa67494d9857cf8e0d98ec65576a15d2cd3c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:56:22.628487    4145 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-964/.minikube/ca.key
	I0815 16:56:22.628524    4145 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-964/.minikube/proxy-client-ca.key
	I0815 16:56:22.628529    4145 certs.go:256] generating profile certs ...
	I0815 16:56:22.628593    4145 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/client.key
	I0815 16:56:22.628614    4145 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.key.73227b1b
	I0815 16:56:22.628625    4145 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.crt.73227b1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0815 16:56:22.867768    4145 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.crt.73227b1b ...
	I0815 16:56:22.867786    4145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.crt.73227b1b: {Name:mk67aa5da0e72bcf848236e37ade401b9d14c0f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:56:22.868404    4145 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.key.73227b1b ...
	I0815 16:56:22.868412    4145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.key.73227b1b: {Name:mk546f651669edc022ebf3798e841d2a806750d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:56:22.868545    4145 certs.go:381] copying /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.crt.73227b1b -> /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.crt
	I0815 16:56:22.868709    4145 certs.go:385] copying /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.key.73227b1b -> /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.key
	I0815 16:56:22.868864    4145 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/proxy-client.key
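
The apiserver serving cert generated above carries IP SANs for the in-cluster service VIP (10.96.0.1), loopback, and the node IP (10.0.2.15), and is signed by the shared minikubeCA. A minimal crypto/x509 sketch of issuing such a cert; the serial numbers, subjects, and validity periods below are illustrative choices, not minikube's values.

    // servingcert.go - sketch of issuing an apiserver serving cert with the
    // IP SANs listed in the log above, signed by a CA.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips, // the SANs: 10.96.0.1, 127.0.0.1, 10.0.0.1, 10.0.2.15
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
        // Throwaway CA purely to make the sketch executable end to end.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)
        ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15")}
        pemBytes, err := signServingCert(ca, caKey, ips)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pemBytes))
    }
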
	I0815 16:56:22.869010    4145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/1446.pem (1338 bytes)
	W0815 16:56:22.869033    4145 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-964/.minikube/certs/1446_empty.pem, impossibly tiny 0 bytes
	I0815 16:56:22.869039    4145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 16:56:22.869082    4145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:56:22.869109    4145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:56:22.869134    4145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/certs/key.pem (1679 bytes)
	I0815 16:56:22.869187    4145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem (1708 bytes)
	I0815 16:56:22.869547    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:56:22.876954    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 16:56:22.884136    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:56:22.891461    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 16:56:22.898475    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 16:56:22.905274    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:56:22.911953    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:56:22.919198    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:56:22.926443    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:56:22.932951    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/certs/1446.pem --> /usr/share/ca-certificates/1446.pem (1338 bytes)
	I0815 16:56:22.939791    4145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/ssl/certs/14462.pem --> /usr/share/ca-certificates/14462.pem (1708 bytes)
	I0815 16:56:22.946961    4145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 16:56:22.951992    4145 ssh_runner.go:195] Run: openssl version
	I0815 16:56:22.953944    4145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:56:22.956707    4145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:56:22.958155    4145 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:56:22.958178    4145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:56:22.959871    4145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:56:22.963256    4145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1446.pem && ln -fs /usr/share/ca-certificates/1446.pem /etc/ssl/certs/1446.pem"
	I0815 16:56:22.966023    4145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1446.pem
	I0815 16:56:22.967404    4145 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:13 /usr/share/ca-certificates/1446.pem
	I0815 16:56:22.967422    4145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1446.pem
	I0815 16:56:22.969365    4145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1446.pem /etc/ssl/certs/51391683.0"
	I0815 16:56:22.972536    4145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14462.pem && ln -fs /usr/share/ca-certificates/14462.pem /etc/ssl/certs/14462.pem"
	I0815 16:56:22.975946    4145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14462.pem
	I0815 16:56:22.977464    4145 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:13 /usr/share/ca-certificates/14462.pem
	I0815 16:56:22.977482    4145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14462.pem
	I0815 16:56:22.979200    4145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14462.pem /etc/ssl/certs/3ec20f2e.0"
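
The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: TLS libraries resolve CAs in /etc/ssl/certs by hashing the certificate subject and looking for "<hash>.0". A sketch of producing such a link by shelling out to the same `openssl x509 -hash` invocation the log runs.

    // subjecthash.go - sketch of installing a CA into /etc/ssl/certs under
    // its OpenSSL subject-hash name, mirroring the ln -fs commands above.
    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkBySubjectHash(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        _ = linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    }
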
	I0815 16:56:22.982194    4145 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:56:22.983619    4145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:56:22.985567    4145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:56:22.987307    4145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:56:22.989557    4145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:56:22.991312    4145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:56:22.993202    4145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
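
The six `-checkend 86400` probes above ask OpenSSL whether each control-plane cert will still be valid 24 hours from now: exit status 0 means yes, 1 means it expires within the window. The same probe in Go:

    // checkend.go - sketch of the 24h expiry probe above: openssl x509
    // -checkend 86400 exits 0 when the cert outlives the window, 1 otherwise.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func expiresWithinDay(certPath string) (bool, error) {
        err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
        if err == nil {
            return false, nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, nil
        }
        return false, err // openssl itself failed
    }

    func main() {
        soon, err := expiresWithinDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        fmt.Println(soon, err)
    }
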
	I0815 16:56:22.995051    4145 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50482 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 16:56:22.995118    4145 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:56:23.005701    4145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 16:56:23.008664    4145 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 16:56:23.008670    4145 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 16:56:23.008694    4145 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 16:56:23.012598    4145 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:56:23.012883    4145 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-889000" does not appear in /Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:56:23.012978    4145 kubeconfig.go:62] /Users/jenkins/minikube-integration/19452-964/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-889000" cluster setting kubeconfig missing "stopped-upgrade-889000" context setting]
	I0815 16:56:23.013154    4145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/kubeconfig: {Name:mk7594709ce290a3e032dc58c8ec366ac5a2a141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:56:23.013622    4145 kapi.go:59] client config for stopped-upgrade-889000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066e9610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 16:56:23.013947    4145 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 16:56:23.016633    4145 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-889000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
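
Drift detection above is just `diff -u` against the freshly rendered kubeadm.yaml.new: exit 0 means no change, exit 1 carries the hunks shown (here the criSocket scheme and cgroup driver changed), and anything else is a real error. Sketched:

    // drift.go - sketch of the kubeadm config drift check above, keyed off
    // diff's exit status.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // identical; no reconfiguration needed
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, string(out), nil // drift; caller logs the hunks
        }
        return false, "", err
    }

    func main() {
        drifted, hunks, _ := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if drifted {
            fmt.Print(hunks)
        }
    }
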
	I0815 16:56:23.016638    4145 kubeadm.go:1160] stopping kube-system containers ...
	I0815 16:56:23.016680    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:56:23.027332    4145 docker.go:483] Stopping containers: [83b99d5f50de 70b7213c6b52 0a558c6ba534 8a3ae34e9cb3 b3f17efb3bfe 88d6c111039f 659d72bec753 b1d53cd33942 d5d0b7ba9f28]
	I0815 16:56:23.027392    4145 ssh_runner.go:195] Run: docker stop 83b99d5f50de 70b7213c6b52 0a558c6ba534 8a3ae34e9cb3 b3f17efb3bfe 88d6c111039f 659d72bec753 b1d53cd33942 d5d0b7ba9f28
	I0815 16:56:23.038355    4145 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 16:56:23.043751    4145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 16:56:23.046855    4145 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 16:56:23.046861    4145 kubeadm.go:157] found existing configuration files:
	
	I0815 16:56:23.046889    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/admin.conf
	I0815 16:56:23.049122    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 16:56:23.049146    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 16:56:23.052162    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/kubelet.conf
	I0815 16:56:23.055067    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 16:56:23.055088    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 16:56:23.057698    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/controller-manager.conf
	I0815 16:56:23.060134    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 16:56:23.060157    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 16:56:23.063235    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/scheduler.conf
	I0815 16:56:23.065602    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 16:56:23.065625    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
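
The grep/rm pairs above implement stale-config cleanup: a kubeconfig is kept only if it already references https://control-plane.minikube.internal:50482. Since none of the four files exist here, each grep fails with status 2 and the (no-op) rm runs, so kubeadm will regenerate them all. The same pass in Go:

    // staleconf.go - sketch of the keep-or-remove pass above over the four
    // control-plane kubeconfigs.
    package main

    import (
        "os"
        "strings"
    )

    func cleanStaleConfigs(endpoint string, paths []string) error {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err == nil && strings.Contains(string(data), endpoint) {
                continue // already points at the right endpoint; keep it
            }
            if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
                return err
            }
        }
        return nil
    }

    func main() {
        _ = cleanStaleConfigs("https://control-plane.minikube.internal:50482", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
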
	I0815 16:56:23.068128    4145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 16:56:23.071092    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:56:23.094290    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:56:23.742187    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:56:23.854059    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 16:56:23.886530    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
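
Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same rendered config, which preserves existing state where possible. The sequence, sketched:

    // phases.go - sketch of replaying kubeadm init phases in the order the
    // log shows, all against the same rendered config.
    package main

    import "os/exec"

    func runPhases(config string) error {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, ph := range phases {
            args := append(ph, "--config", config)
            if err := exec.Command("kubeadm", args...).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        _ = runPhases("/var/tmp/minikube/kubeadm.yaml")
    }
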
	I0815 16:56:23.908124    4145 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:56:23.908200    4145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:56:24.410266    4145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:56:24.910274    4145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:56:24.914361    4145 api_server.go:72] duration metric: took 1.006227917s to wait for apiserver process to appear ...
	I0815 16:56:24.914372    4145 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:56:24.914384    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:29.916590    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:29.916617    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:34.917331    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:34.917358    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:39.917842    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:39.917867    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:44.918476    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:44.918560    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:49.919433    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:49.919453    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:54.920547    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:54.920636    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:56:59.922381    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:56:59.922404    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:04.924085    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:04.924114    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:09.926256    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:09.926291    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:14.928564    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:14.928589    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:19.930941    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:19.931022    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:24.933360    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
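
Every healthz probe above times out after 5s and is retried until the overall wait expires; the apiserver never answers, so the run falls through to log collection below. A sketch of the probe loop; the skipped TLS verification is for brevity only, where the real client trusts the cluster CA instead.

    // healthz.go - sketch of the apiserver healthz wait above: poll with a
    // 5s per-request timeout until HTTP 200 or the overall deadline passes.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthz(url string, overall time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }
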
	I0815 16:57:24.933604    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:57:24.954229    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:57:24.954330    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:57:24.970638    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:57:24.970724    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:57:24.983872    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:57:24.983939    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:57:24.994785    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:57:24.994858    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:57:25.004865    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:57:25.004939    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:57:25.015276    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:57:25.015361    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:57:25.025358    4145 logs.go:276] 0 containers: []
	W0815 16:57:25.025370    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:57:25.025427    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:57:25.036759    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:57:25.036776    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:57:25.036782    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:57:25.051060    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:57:25.051070    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:57:25.062047    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:57:25.062058    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:57:25.074513    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:57:25.074525    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:57:25.086774    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:57:25.086789    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:57:25.104871    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:57:25.104883    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:57:25.120683    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:57:25.120699    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:57:25.132430    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:57:25.132447    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:57:25.169939    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:57:25.169948    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:57:25.174389    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:57:25.174400    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:57:25.189406    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:57:25.189418    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:57:25.215349    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:57:25.215357    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:57:25.228101    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:57:25.228115    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:57:25.309247    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:57:25.309261    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:57:25.349186    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:57:25.349199    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:57:25.365359    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:57:25.365370    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:57:25.379353    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:57:25.379366    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
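	(The cycle above — a healthz probe that dies with "Client.Timeout exceeded while awaiting headers" roughly every five seconds, followed by a full log-gathering pass — repeats for the rest of this run. As a minimal sketch of the kind of probe api_server.go is logging here, assuming a 5-second http.Client timeout to match the intervals between the timestamps above and skipping TLS verification for the VM's self-signed certificate; this illustrates the pattern, not minikube's actual implementation:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz probes an apiserver /healthz endpoint until it answers 200 OK
	// or the overall deadline expires. Illustrative sketch only.
	func pollHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gaps between log lines
			Transport: &http.Transport{
				// The test VM serves a self-signed cert; verification is
				// skipped here purely for illustration.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err != nil {
				// e.g. "context deadline exceeded (Client.Timeout exceeded
				// while awaiting headers)", as seen throughout this log.
				fmt.Printf("stopped: %s: %v\n", url, err)
				time.Sleep(2 * time.Second)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := pollHealthz("https://10.0.2.15:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}

	In this run the probe never succeeds, which is why every cycle falls through to the log-gathering pass that follows.)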
	I0815 16:57:27.893939    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:32.896366    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:32.896738    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:57:32.931660    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:57:32.931802    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:57:32.951369    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:57:32.951475    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:57:32.965835    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:57:32.965912    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:57:32.978271    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:57:32.978364    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:57:32.989468    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:57:32.989547    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:57:33.000244    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:57:33.000316    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:57:33.010079    4145 logs.go:276] 0 containers: []
	W0815 16:57:33.010089    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:57:33.010145    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:57:33.020497    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:57:33.020515    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:57:33.020520    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:57:33.024493    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:57:33.024503    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:57:33.038358    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:57:33.038369    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:57:33.060427    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:57:33.060441    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:57:33.077335    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:57:33.077349    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:57:33.088917    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:57:33.088929    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:57:33.101198    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:57:33.101211    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:57:33.139950    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:57:33.139959    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:57:33.175389    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:57:33.175399    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:57:33.187712    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:57:33.187725    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:57:33.205775    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:57:33.205785    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:57:33.217066    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:57:33.217078    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:57:33.229231    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:57:33.229243    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:57:33.242991    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:57:33.243001    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:57:33.280606    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:57:33.280617    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:57:33.298116    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:57:33.298128    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:57:33.309929    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:57:33.309952    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
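	(Each gathering pass follows the same shape: logs.go enumerates every control-plane component with a docker ps name filter (k8s_kube-apiserver, k8s_etcd, ...), reports "No container was found matching" when a filter such as "kindnet" comes up empty, then tails the last 400 lines of each container it did find. A hedged sketch of that enumerate-then-tail loop, shelling out the same commands the ssh_runner lines record; the command strings are taken from the log, while the surrounding Go scaffolding is an assumption for illustration:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// components mirrors the k8s_* name filters that appear in this log.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}

	func main() {
		for _, c := range components {
			// Same enumeration the ssh_runner lines show:
			// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
			out, err := exec.Command("docker", "ps", "-a",
				"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
			if err != nil {
				fmt.Printf("listing %s containers: %v\n", c, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				// ...followed by: docker logs --tail 400 <id>
				logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				if err != nil {
					fmt.Printf("gathering logs for %s [%s]: %v\n", c, id, err)
					continue
				}
				fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
			}
		}
	}

	The non-container sources in each pass (kubelet, Docker, dmesg, describe nodes, container status) are collected analogously via journalctl, dmesg, and kubectl, exactly as the /bin/bash -c commands above show.)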
	I0815 16:57:35.837739    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:40.840365    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:40.840720    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:57:40.878579    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:57:40.878718    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:57:40.903474    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:57:40.903572    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:57:40.918368    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:57:40.918456    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:57:40.935634    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:57:40.935726    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:57:40.947585    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:57:40.947654    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:57:40.958168    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:57:40.958226    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:57:40.968536    4145 logs.go:276] 0 containers: []
	W0815 16:57:40.968548    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:57:40.968612    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:57:40.979270    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:57:40.979289    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:57:40.979295    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:57:40.991575    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:57:40.991586    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:57:41.010309    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:57:41.010320    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:57:41.033806    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:57:41.033816    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:57:41.070745    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:57:41.070760    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:57:41.109202    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:57:41.109217    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:57:41.125100    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:57:41.125113    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:57:41.140643    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:57:41.140656    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:57:41.152194    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:57:41.152204    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:57:41.165634    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:57:41.165646    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:57:41.185503    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:57:41.185517    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:57:41.197130    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:57:41.197141    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:57:41.214473    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:57:41.214483    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:57:41.219645    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:57:41.219655    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:57:41.254050    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:57:41.254063    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:57:41.268068    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:57:41.268080    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:57:41.281780    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:57:41.281791    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:57:43.794031    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:48.796454    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:48.796602    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:57:48.815217    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:57:48.815309    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:57:48.829786    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:57:48.829873    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:57:48.841501    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:57:48.841573    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:57:48.851827    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:57:48.851902    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:57:48.862307    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:57:48.862369    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:57:48.873167    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:57:48.873225    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:57:48.883520    4145 logs.go:276] 0 containers: []
	W0815 16:57:48.883532    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:57:48.883590    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:57:48.894127    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:57:48.894148    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:57:48.894156    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:57:48.934609    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:57:48.934619    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:57:48.948737    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:57:48.948751    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:57:48.959634    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:57:48.959646    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:57:48.973589    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:57:48.973599    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:57:48.989229    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:57:48.989240    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:57:49.012174    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:57:49.012183    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:57:49.026050    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:57:49.026061    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:57:49.040867    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:57:49.040884    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:57:49.052556    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:57:49.052567    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:57:49.063708    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:57:49.063724    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:57:49.075301    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:57:49.075315    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:57:49.079732    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:57:49.079742    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:57:49.117050    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:57:49.117064    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:57:49.128838    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:57:49.128851    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:57:49.147291    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:57:49.147302    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:57:49.182242    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:57:49.182257    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:57:51.699212    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:57:56.701566    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:57:56.701822    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:57:56.725102    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:57:56.725206    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:57:56.740534    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:57:56.740608    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:57:56.752884    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:57:56.752952    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:57:56.763506    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:57:56.763572    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:57:56.773484    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:57:56.773548    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:57:56.784441    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:57:56.784498    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:57:56.794233    4145 logs.go:276] 0 containers: []
	W0815 16:57:56.794245    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:57:56.794295    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:57:56.804913    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:57:56.804930    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:57:56.804935    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:57:56.844319    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:57:56.844329    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:57:56.858177    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:57:56.858190    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:57:56.873610    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:57:56.873623    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:57:56.885209    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:57:56.885221    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:57:56.920519    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:57:56.920530    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:57:56.958546    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:57:56.958556    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:57:56.970442    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:57:56.970454    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:57:56.988377    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:57:56.988391    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:57:57.002840    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:57:57.002853    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:57:57.020377    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:57:57.020389    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:57:57.039829    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:57:57.039839    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:57:57.051536    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:57:57.051550    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:57:57.065498    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:57:57.065514    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:57:57.070053    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:57:57.070060    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:57:57.083729    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:57:57.083741    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:57:57.097502    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:57:57.097516    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:57:59.626782    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:04.629138    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:04.629271    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:04.643758    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:58:04.643838    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:04.655635    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:58:04.655702    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:04.665833    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:58:04.665899    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:04.676661    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:58:04.676734    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:04.687811    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:58:04.687873    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:04.698555    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:58:04.698622    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:04.708521    4145 logs.go:276] 0 containers: []
	W0815 16:58:04.708533    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:04.708589    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:04.718696    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:58:04.718713    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:04.718720    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:04.722652    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:04.722662    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:04.757688    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:58:04.757700    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:58:04.769513    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:58:04.769524    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:58:04.786588    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:58:04.786599    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:58:04.804782    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:04.804794    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:04.843489    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:58:04.843497    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:58:04.858730    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:58:04.858741    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:58:04.870476    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:58:04.870488    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:58:04.882474    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:58:04.882485    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:04.894333    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:58:04.894346    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:58:04.908822    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:58:04.908833    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:58:04.950545    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:58:04.950559    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:58:04.964691    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:58:04.964705    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:58:04.975988    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:04.976003    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:04.999606    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:58:04.999615    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:58:05.013549    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:58:05.013563    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:58:07.527138    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:12.529590    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:12.529771    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:12.544424    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:58:12.544504    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:12.556495    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:58:12.556569    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:12.567364    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:58:12.567435    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:12.577970    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:58:12.578038    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:12.588207    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:58:12.588285    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:12.599027    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:58:12.599099    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:12.609456    4145 logs.go:276] 0 containers: []
	W0815 16:58:12.609475    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:12.609531    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:12.619978    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:58:12.619994    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:58:12.619999    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:58:12.631248    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:58:12.631260    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:58:12.642107    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:12.642120    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:12.666778    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:58:12.666788    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:58:12.680493    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:58:12.680504    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:58:12.694736    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:58:12.694746    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:58:12.712886    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:58:12.712896    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:58:12.724524    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:58:12.724538    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:58:12.739509    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:58:12.739519    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:58:12.756570    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:58:12.756582    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:12.768179    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:12.768190    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:12.772853    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:58:12.772859    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:58:12.811086    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:58:12.811099    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:58:12.822334    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:58:12.822346    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:58:12.833802    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:12.833812    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:12.871830    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:12.871843    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:12.906994    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:58:12.907007    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:58:15.422909    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:20.424569    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:20.424782    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:20.442448    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:58:20.442539    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:20.457350    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:58:20.457419    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:20.477221    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:58:20.477283    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:20.491853    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:58:20.491933    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:20.503683    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:58:20.503760    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:20.514596    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:58:20.514654    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:20.525115    4145 logs.go:276] 0 containers: []
	W0815 16:58:20.525128    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:20.525175    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:20.535746    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:58:20.535764    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:20.535770    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:20.570876    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:58:20.570888    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:58:20.582085    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:58:20.582098    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:58:20.605082    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:58:20.605095    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:58:20.623900    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:58:20.623914    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:58:20.639337    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:58:20.639350    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:58:20.650611    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:58:20.650623    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:58:20.664933    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:58:20.664947    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:58:20.678907    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:58:20.678918    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:58:20.690649    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:58:20.690659    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:58:20.701954    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:58:20.701964    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:58:20.713085    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:58:20.713096    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:20.725255    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:20.725265    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:20.763923    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:20.763934    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:20.768253    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:58:20.768260    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:58:20.782463    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:58:20.782474    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:58:20.821953    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:20.821964    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:23.348290    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:28.350652    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:28.350834    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:28.369713    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:58:28.369805    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:28.384534    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:58:28.384614    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:28.396822    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:58:28.396891    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:28.411912    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:58:28.411982    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:28.422808    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:58:28.422883    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:28.435928    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:58:28.436000    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:28.445975    4145 logs.go:276] 0 containers: []
	W0815 16:58:28.445987    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:28.446045    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:28.457056    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:58:28.457073    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:28.457079    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:28.461369    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:58:28.461377    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:58:28.475039    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:58:28.475049    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:58:28.486130    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:28.486144    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:28.520676    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:58:28.520687    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:58:28.535204    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:58:28.535218    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:58:28.575582    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:58:28.575597    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:58:28.590052    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:58:28.590063    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:58:28.604424    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:58:28.604436    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:58:28.619710    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:58:28.619720    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:58:28.631147    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:28.631157    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:28.654490    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:28.654498    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:28.690543    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:58:28.690551    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:58:28.704947    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:58:28.704959    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:58:28.722617    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:58:28.722626    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:28.734707    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:58:28.734722    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:58:28.748212    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:58:28.748224    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:58:31.261987    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:36.264556    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:36.264970    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:36.306732    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:58:36.306861    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:36.326831    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:58:36.326932    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:36.341643    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:58:36.341725    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:36.354357    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:58:36.354438    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:36.365560    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:58:36.365622    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:36.376228    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:58:36.376301    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:36.387357    4145 logs.go:276] 0 containers: []
	W0815 16:58:36.387368    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:36.387430    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:36.398203    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:58:36.398222    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:58:36.398228    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:58:36.411333    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:58:36.411347    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:58:36.427197    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:36.427208    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:36.450481    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:36.450499    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:36.487330    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:36.487342    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:36.522544    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:58:36.522558    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:58:36.534181    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:58:36.534193    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:58:36.545573    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:58:36.545586    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:58:36.560683    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:58:36.560695    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:36.575616    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:58:36.575635    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:58:36.589562    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:58:36.589572    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:58:36.605083    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:58:36.605097    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:58:36.618925    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:58:36.618937    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:58:36.636385    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:58:36.636398    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:58:36.662753    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:58:36.662768    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:58:36.673962    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:36.673972    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:36.678168    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:58:36.678175    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:58:39.217690    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:44.220184    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:44.220453    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:44.245514    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:58:44.245631    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:44.263973    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:58:44.264057    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:44.281092    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:58:44.281159    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:44.292222    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:58:44.292293    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:44.305875    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:58:44.305941    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:44.317637    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:58:44.317719    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:44.328196    4145 logs.go:276] 0 containers: []
	W0815 16:58:44.328210    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:44.328269    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:44.338547    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:58:44.338567    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:58:44.338573    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:58:44.349420    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:44.349431    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:44.383677    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:58:44.383687    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:58:44.397951    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:58:44.397965    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:58:44.412564    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:58:44.412574    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:58:44.424635    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:58:44.424650    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:58:44.436983    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:58:44.436995    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:58:44.449604    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:44.449616    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:44.487117    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:44.487131    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:44.510135    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:58:44.510143    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:58:44.548749    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:58:44.548762    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:58:44.562719    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:58:44.562731    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:58:44.576793    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:58:44.576803    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:44.589132    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:44.589145    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:44.593885    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:58:44.593893    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:58:44.613104    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:58:44.613115    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:58:44.628461    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:58:44.628475    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
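
With the IDs in hand, the batch above pulls the last 400 lines from every source: docker logs --tail 400 per container, journalctl for the kubelet and docker/cri-docker units, a severity-filtered dmesg, crictl/docker ps for container status, and kubectl describe nodes against the in-VM kubeconfig. The batch order shifts from cycle to cycle, consistent with iterating a Go map. A sketch of that fan-out under those assumptions (the command strings are copied from the log; the gather helper is hypothetical, and in the real run these execute over SSH inside the VM rather than locally):

// gather_logs.go - illustrative sketch of the per-cycle diagnostics
// fan-out seen above. Only the command strings come from the log;
// names and error handling are assumptions.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, command string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	if err != nil {
		fmt.Printf("  %s failed: %v\n", name, err)
	}
	_ = out // a real runner would attach this output to the report
}

func main() {
	// Map iteration order is randomized in Go, matching the shifting
	// batch order between the cycles above.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	}
	for name, cmd := range sources {
		gather(name, cmd)
	}
	// Per-container logs use the same shape:
	gather("kube-apiserver [d6b82cd4f040]", "docker logs --tail 400 d6b82cd4f040")
}
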
	I0815 16:58:47.149794    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:58:52.152307    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:58:52.152788    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:58:52.199986    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:58:52.200120    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:58:52.219308    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:58:52.219397    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:58:52.233229    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:58:52.233300    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:58:52.245935    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:58:52.246024    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:58:52.256834    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:58:52.256907    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:58:52.267494    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:58:52.267563    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:58:52.279372    4145 logs.go:276] 0 containers: []
	W0815 16:58:52.279384    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:58:52.279448    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:58:52.290136    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:58:52.290155    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:58:52.290162    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:58:52.310255    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:58:52.310266    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:58:52.329948    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:58:52.329963    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:58:52.342387    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:58:52.342399    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:58:52.354435    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:58:52.354448    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:58:52.372703    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:58:52.372717    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:58:52.398533    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:58:52.398547    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:58:52.436317    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:58:52.436343    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:58:52.452534    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:58:52.452546    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:58:52.491573    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:58:52.491589    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:58:52.505905    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:58:52.505916    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:58:52.519693    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:58:52.519702    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:58:52.536627    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:58:52.536642    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:58:52.549809    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:58:52.549823    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:58:52.588418    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:58:52.588431    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:58:52.592815    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:58:52.592822    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:58:52.604299    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:58:52.604313    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:58:55.118169    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:00.120884    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:00.121196    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:00.144813    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:00.144938    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:00.160447    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:00.160524    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:00.173011    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:00.173087    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:00.184146    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:00.184221    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:00.198581    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:00.198650    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:00.209199    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:00.209265    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:00.219866    4145 logs.go:276] 0 containers: []
	W0815 16:59:00.219875    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:00.219929    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:00.231584    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:00.231602    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:00.231607    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:00.267732    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:00.267741    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:00.281794    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:00.281805    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:00.307816    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:00.307826    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:00.319886    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:00.319899    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:00.331733    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:00.331747    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:00.345762    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:00.345774    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:00.384160    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:00.384173    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:00.396198    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:00.396212    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:00.411351    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:00.411361    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:00.423356    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:00.423370    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:00.427811    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:00.427819    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:00.446131    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:00.446144    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:00.457440    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:00.457453    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:00.472136    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:00.472152    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:00.514328    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:00.514341    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:00.530891    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:00.530900    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:03.057319    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:08.059701    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:08.059902    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:08.083407    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:08.083488    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:08.094552    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:08.094623    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:08.105027    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:08.105099    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:08.115176    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:08.115254    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:08.150952    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:08.151026    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:08.172979    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:08.173045    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:08.182979    4145 logs.go:276] 0 containers: []
	W0815 16:59:08.182989    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:08.183042    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:08.193864    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:08.193884    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:08.193889    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:08.205803    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:08.205815    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:08.217582    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:08.217594    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:08.231786    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:08.231796    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:08.243438    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:08.243448    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:08.282558    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:08.282576    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:08.286764    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:08.286770    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:08.327908    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:08.327922    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:08.343542    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:08.343553    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:08.363058    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:08.363070    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:08.375655    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:08.375667    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:08.413119    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:08.413132    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:08.429719    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:08.429730    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:08.443016    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:08.443029    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:08.468762    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:08.468775    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:08.484597    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:08.484609    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:08.496821    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:08.496834    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:11.013517    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:16.014145    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:16.014338    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:16.029874    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:16.029960    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:16.042880    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:16.042956    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:16.054107    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:16.054182    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:16.064230    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:16.064301    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:16.074677    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:16.074745    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:16.085563    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:16.085634    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:16.096204    4145 logs.go:276] 0 containers: []
	W0815 16:59:16.096214    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:16.096273    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:16.106890    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:16.106907    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:16.106912    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:16.120865    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:16.120878    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:16.135010    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:16.135024    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:16.146430    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:16.146443    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:16.182541    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:16.182548    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:16.197498    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:16.197511    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:16.209986    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:16.209996    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:16.253616    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:16.253632    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:16.272960    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:16.272972    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:16.285928    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:16.285943    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:16.307861    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:16.307880    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:16.327186    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:16.327202    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:16.340023    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:16.340037    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:16.378045    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:16.378059    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:16.390695    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:16.390707    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:16.403059    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:16.403069    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:16.426995    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:16.427010    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
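
Taken together, the section is one wait loop: probe healthz, hit the 5 s timeout, dump diagnostics, pause roughly 2.5 s, and try again until an outer deadline expires. A sketch of that control flow; the pause and the deadline are assumptions read off the timestamps above, and probe/gatherDiagnostics are stand-ins for the steps already shown:

// wait_loop.go - illustrative sketch of the probe/diagnose cycle that
// repeats throughout this section. Timings are inferred, not minikube's
// documented values.
package main

import (
	"errors"
	"fmt"
	"time"
)

func probe() error { return errors.New("context deadline exceeded") } // stand-in

func gatherDiagnostics() { fmt.Println("gathering container and journal logs ...") }

func waitForAPIServer(deadline time.Duration) error {
	start := time.Now()
	for time.Since(start) < deadline {
		fmt.Println("Checking apiserver healthz ...")
		if err := probe(); err == nil {
			return nil
		}
		gatherDiagnostics()                 // the docker logs / journalctl batch
		time.Sleep(2500 * time.Millisecond) // observed gap before the next check
	}
	return fmt.Errorf("apiserver never became healthy within %s", deadline)
}

func main() {
	// Short deadline so the demo terminates; the wait in this log
	// evidently runs for minutes.
	if err := waitForAPIServer(15 * time.Second); err != nil {
		fmt.Println(err)
	}
}
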
	I0815 16:59:18.932744    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:23.935162    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:23.935338    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:23.949703    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:23.949789    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:23.962130    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:23.962206    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:23.976055    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:23.976142    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:23.986921    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:23.986992    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:23.997711    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:23.997778    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:24.008123    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:24.008193    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:24.018488    4145 logs.go:276] 0 containers: []
	W0815 16:59:24.018501    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:24.018564    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:24.029455    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:24.029471    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:24.029476    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:24.043660    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:24.043673    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:24.058686    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:24.058698    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:24.071408    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:24.071417    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:24.083859    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:24.083871    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:24.122741    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:24.122762    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:24.161328    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:24.161345    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:24.179948    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:24.179964    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:24.192351    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:24.192365    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:24.218421    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:24.218434    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:24.233067    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:24.233082    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:24.256787    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:24.256798    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:24.275994    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:24.276007    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:24.291218    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:24.291231    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:24.295575    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:24.295588    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:24.337891    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:24.337903    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:24.356692    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:24.356704    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:26.878486    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:31.880901    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:31.881314    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:31.917303    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:31.917425    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:31.941939    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:31.942021    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:31.956160    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:31.956238    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:31.968177    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:31.968263    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:31.979389    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:31.979463    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:31.991396    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:31.991489    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:32.002420    4145 logs.go:276] 0 containers: []
	W0815 16:59:32.002432    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:32.002493    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:32.013566    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:32.013586    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:32.013592    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:32.025867    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:32.025878    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:32.038732    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:32.038746    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:32.052029    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:32.052055    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:32.093212    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:32.093233    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:32.112242    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:32.112259    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:32.136673    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:32.136680    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:32.162258    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:32.162268    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:32.166676    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:32.166683    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:32.205521    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:32.205535    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:32.221637    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:32.221654    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:32.234046    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:32.234060    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:32.245759    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:32.245774    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:32.258305    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:32.258317    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:32.275219    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:32.275231    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:32.293938    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:32.293950    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:32.333891    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:32.333904    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:34.850953    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:39.851597    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:39.851700    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:39.871168    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:39.871256    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:39.890346    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:39.890453    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:39.908624    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:39.908693    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:39.920119    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:39.920152    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:39.931696    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:39.931764    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:39.943610    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:39.943690    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:39.959817    4145 logs.go:276] 0 containers: []
	W0815 16:59:39.959831    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:39.959894    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:39.976576    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:39.976598    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:39.976604    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:39.981584    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:39.981597    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:40.019589    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:40.019602    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:40.034833    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:40.034844    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:40.076505    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:40.076528    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:40.092994    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:40.093006    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:40.104847    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:40.104861    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:40.128370    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:40.128389    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:40.143177    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:40.143193    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:40.181844    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:40.181857    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:40.201745    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:40.201765    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:40.214775    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:40.214784    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:40.230681    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:40.230693    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:40.242780    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:40.242795    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:40.254547    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:40.254562    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:40.265442    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:40.265454    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:40.277154    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:40.277169    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:42.802255    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:47.804731    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:47.804799    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:47.820508    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:47.820583    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:47.832425    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:47.832498    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:47.844031    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:47.844105    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:47.855301    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:47.855374    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:47.866511    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:47.866581    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:47.877643    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:47.877713    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:47.888767    4145 logs.go:276] 0 containers: []
	W0815 16:59:47.888780    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:47.888838    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:47.900264    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:47.900284    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:47.900290    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:47.915917    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:47.915929    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:47.932485    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:47.932499    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:47.972930    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:47.972942    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:48.011031    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:48.011044    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:48.056509    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:48.056522    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:48.071154    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:48.071165    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:48.090513    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:48.090525    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:48.113178    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:48.113191    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:48.125767    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:48.125784    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:48.149060    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:48.149078    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:48.153725    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:48.153733    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:48.164997    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:48.165012    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:48.176723    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:48.176735    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:48.188598    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:48.188610    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:48.206792    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:48.206804    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:48.221230    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:48.221244    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:50.744764    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 16:59:55.745943    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:59:55.746034    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 16:59:55.758194    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 16:59:55.758266    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 16:59:55.769445    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 16:59:55.769519    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 16:59:55.786009    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 16:59:55.786080    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 16:59:55.801879    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 16:59:55.801955    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 16:59:55.813302    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 16:59:55.813380    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 16:59:55.824546    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 16:59:55.824607    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 16:59:55.835745    4145 logs.go:276] 0 containers: []
	W0815 16:59:55.835760    4145 logs.go:278] No container was found matching "kindnet"
	I0815 16:59:55.835818    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 16:59:55.847144    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 16:59:55.847175    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 16:59:55.847181    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 16:59:55.867339    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 16:59:55.867351    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 16:59:55.882541    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 16:59:55.882559    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 16:59:55.887218    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 16:59:55.887227    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 16:59:55.902319    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 16:59:55.902331    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 16:59:55.920963    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 16:59:55.920978    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 16:59:55.933804    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 16:59:55.933815    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 16:59:55.973180    4145 logs.go:123] Gathering logs for container status ...
	I0815 16:59:55.973194    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 16:59:55.985478    4145 logs.go:123] Gathering logs for Docker ...
	I0815 16:59:55.985491    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 16:59:56.009488    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 16:59:56.009506    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 16:59:56.053704    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 16:59:56.053716    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 16:59:56.064465    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 16:59:56.064476    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 16:59:56.084974    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 16:59:56.084985    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 16:59:56.096504    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 16:59:56.096515    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 16:59:56.132177    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 16:59:56.132194    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 16:59:56.147116    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 16:59:56.147130    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 16:59:56.158605    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 16:59:56.158618    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 16:59:58.671647    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:03.673959    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:03.674066    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:03.685968    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 17:00:03.686044    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:03.696955    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 17:00:03.697023    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:03.707670    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 17:00:03.707733    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:03.719342    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 17:00:03.719438    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:03.731356    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 17:00:03.731433    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:03.742852    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 17:00:03.742932    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:03.754357    4145 logs.go:276] 0 containers: []
	W0815 17:00:03.754369    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:03.754434    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:03.765815    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 17:00:03.765854    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 17:00:03.765862    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 17:00:03.781607    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:03.781617    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:03.785997    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:03.786008    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:03.828322    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 17:00:03.828334    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 17:00:03.844999    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:03.845012    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:03.885517    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 17:00:03.885531    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 17:00:03.899347    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 17:00:03.899359    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 17:00:03.911570    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 17:00:03.911580    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 17:00:03.924334    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:03.924346    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:03.947472    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 17:00:03.947485    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 17:00:03.985907    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 17:00:03.985921    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 17:00:03.997152    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 17:00:03.997165    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 17:00:04.012000    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 17:00:04.012009    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 17:00:04.029683    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 17:00:04.029699    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 17:00:04.043518    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 17:00:04.043530    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 17:00:04.054675    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 17:00:04.054688    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 17:00:04.065508    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:00:04.065519    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:06.579896    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:11.582476    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
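The alternation above between "Checking apiserver healthz" (api_server.go:253) and "stopped: ... Client.Timeout exceeded" (api_server.go:269) is a fixed-interval health probe against the apiserver, repeated until a deadline. A minimal Go sketch of that loop, assuming an illustrative 5s client timeout and 2s retry interval rather than minikube's exact values:

```go
// Sketch of the healthz polling visible in the log: probe
// https://<node>:8443/healthz with a short per-request timeout and retry
// until an overall deadline. Values here are illustrative assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap before "Client.Timeout exceeded"
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert during bring-up.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(2 * time.Second) // back off before the next probe
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```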
	I0815 17:00:11.582568    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:11.594227    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 17:00:11.594313    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:11.605521    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 17:00:11.605600    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:11.616957    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 17:00:11.617032    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:11.630038    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 17:00:11.630127    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:11.641182    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 17:00:11.641250    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:11.653107    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 17:00:11.653174    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:11.664050    4145 logs.go:276] 0 containers: []
	W0815 17:00:11.664063    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:11.664121    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:11.675065    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 17:00:11.675083    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:11.675088    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:11.712760    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 17:00:11.712774    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 17:00:11.732347    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 17:00:11.732358    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 17:00:11.744307    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 17:00:11.744319    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 17:00:11.768630    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:11.768641    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:11.806567    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 17:00:11.806576    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 17:00:11.818156    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:11.818168    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:11.841863    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 17:00:11.841872    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 17:00:11.853671    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 17:00:11.853681    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 17:00:11.867479    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 17:00:11.867488    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 17:00:11.878813    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:00:11.878823    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:11.890717    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:11.890727    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:11.894806    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 17:00:11.894814    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 17:00:11.910160    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 17:00:11.910174    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 17:00:11.921253    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 17:00:11.921264    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 17:00:11.936242    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 17:00:11.936255    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 17:00:11.949852    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 17:00:11.949862    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 17:00:14.488401    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:19.490796    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:19.490896    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:00:19.501710    4145 logs.go:276] 2 containers: [d6b82cd4f040 0a558c6ba534]
	I0815 17:00:19.501793    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:00:19.512426    4145 logs.go:276] 2 containers: [f3ca05035c22 659d72bec753]
	I0815 17:00:19.512498    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:00:19.523654    4145 logs.go:276] 1 containers: [1d717a7d6892]
	I0815 17:00:19.523727    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:00:19.535470    4145 logs.go:276] 2 containers: [9061db02b38f 70b7213c6b52]
	I0815 17:00:19.535564    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:00:19.551172    4145 logs.go:276] 1 containers: [d306eaa13bb2]
	I0815 17:00:19.551244    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:00:19.566243    4145 logs.go:276] 2 containers: [be7d9bddbea3 83b99d5f50de]
	I0815 17:00:19.566314    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:00:19.577765    4145 logs.go:276] 0 containers: []
	W0815 17:00:19.577777    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:00:19.577838    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:00:19.589840    4145 logs.go:276] 2 containers: [442b8bba99a3 4e07e0ae96f2]
	I0815 17:00:19.589855    4145 logs.go:123] Gathering logs for kube-controller-manager [83b99d5f50de] ...
	I0815 17:00:19.589859    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83b99d5f50de"
	I0815 17:00:19.604587    4145 logs.go:123] Gathering logs for storage-provisioner [442b8bba99a3] ...
	I0815 17:00:19.604596    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 442b8bba99a3"
	I0815 17:00:19.616755    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:00:19.616772    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:00:19.641772    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:00:19.641782    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:00:19.681561    4145 logs.go:123] Gathering logs for kube-apiserver [0a558c6ba534] ...
	I0815 17:00:19.681572    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a558c6ba534"
	I0815 17:00:19.719865    4145 logs.go:123] Gathering logs for coredns [1d717a7d6892] ...
	I0815 17:00:19.719882    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d717a7d6892"
	I0815 17:00:19.731452    4145 logs.go:123] Gathering logs for storage-provisioner [4e07e0ae96f2] ...
	I0815 17:00:19.731464    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e07e0ae96f2"
	I0815 17:00:19.742825    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:00:19.742841    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:00:19.754965    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:00:19.754976    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:00:19.759408    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:00:19.759417    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:00:19.794033    4145 logs.go:123] Gathering logs for kube-scheduler [70b7213c6b52] ...
	I0815 17:00:19.794047    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b7213c6b52"
	I0815 17:00:19.809597    4145 logs.go:123] Gathering logs for kube-apiserver [d6b82cd4f040] ...
	I0815 17:00:19.809611    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6b82cd4f040"
	I0815 17:00:19.824845    4145 logs.go:123] Gathering logs for etcd [f3ca05035c22] ...
	I0815 17:00:19.824855    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ca05035c22"
	I0815 17:00:19.838702    4145 logs.go:123] Gathering logs for kube-proxy [d306eaa13bb2] ...
	I0815 17:00:19.838714    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d306eaa13bb2"
	I0815 17:00:19.850168    4145 logs.go:123] Gathering logs for kube-controller-manager [be7d9bddbea3] ...
	I0815 17:00:19.850181    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be7d9bddbea3"
	I0815 17:00:19.868068    4145 logs.go:123] Gathering logs for etcd [659d72bec753] ...
	I0815 17:00:19.868082    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 659d72bec753"
	I0815 17:00:19.883238    4145 logs.go:123] Gathering logs for kube-scheduler [9061db02b38f] ...
	I0815 17:00:19.883247    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9061db02b38f"
	I0815 17:00:22.396786    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:27.397935    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:27.397974    4145 kubeadm.go:597] duration metric: took 4m4.386610291s to restartPrimaryControlPlane
	W0815 17:00:27.398010    4145 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 17:00:27.398025    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0815 17:00:28.389587    4145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:00:28.394884    4145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 17:00:28.398018    4145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 17:00:28.401182    4145 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 17:00:28.401189    4145 kubeadm.go:157] found existing configuration files:
	
	I0815 17:00:28.401217    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/admin.conf
	I0815 17:00:28.403757    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 17:00:28.403787    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 17:00:28.406489    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/kubelet.conf
	I0815 17:00:28.409463    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 17:00:28.409485    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 17:00:28.412300    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/controller-manager.conf
	I0815 17:00:28.414573    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 17:00:28.414597    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 17:00:28.417698    4145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/scheduler.conf
	I0815 17:00:28.420245    4145 kubeadm.go:163] "https://control-plane.minikube.internal:50482" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50482 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 17:00:28.420266    4145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 17:00:28.422678    4145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 17:00:28.439633    4145 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0815 17:00:28.439713    4145 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 17:00:28.491576    4145 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 17:00:28.491716    4145 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 17:00:28.491772    4145 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 17:00:28.542340    4145 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 17:00:28.546502    4145 out.go:235]   - Generating certificates and keys ...
	I0815 17:00:28.546534    4145 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 17:00:28.546566    4145 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 17:00:28.546611    4145 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 17:00:28.546646    4145 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 17:00:28.546677    4145 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 17:00:28.546716    4145 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 17:00:28.546747    4145 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 17:00:28.546821    4145 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 17:00:28.546871    4145 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 17:00:28.546934    4145 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 17:00:28.546954    4145 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 17:00:28.546980    4145 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 17:00:28.664170    4145 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 17:00:28.785538    4145 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 17:00:28.831739    4145 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 17:00:28.951096    4145 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 17:00:28.981713    4145 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 17:00:28.982130    4145 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 17:00:28.982199    4145 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 17:00:29.068689    4145 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 17:00:29.072889    4145 out.go:235]   - Booting up control plane ...
	I0815 17:00:29.072935    4145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 17:00:29.073042    4145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 17:00:29.073140    4145 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 17:00:29.076031    4145 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 17:00:29.076945    4145 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 17:00:33.079120    4145 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001633 seconds
	I0815 17:00:33.079180    4145 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 17:00:33.082794    4145 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 17:00:33.595178    4145 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 17:00:33.595380    4145 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-889000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 17:00:34.098829    4145 kubeadm.go:310] [bootstrap-token] Using token: 2x6pd0.lf3zfx9c874ubs97
	I0815 17:00:34.105114    4145 out.go:235]   - Configuring RBAC rules ...
	I0815 17:00:34.105166    4145 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 17:00:34.105219    4145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 17:00:34.107250    4145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 17:00:34.108777    4145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 17:00:34.109655    4145 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 17:00:34.110485    4145 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 17:00:34.114816    4145 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 17:00:34.276315    4145 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 17:00:34.502743    4145 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 17:00:34.503217    4145 kubeadm.go:310] 
	I0815 17:00:34.503249    4145 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 17:00:34.503257    4145 kubeadm.go:310] 
	I0815 17:00:34.503299    4145 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 17:00:34.503304    4145 kubeadm.go:310] 
	I0815 17:00:34.503317    4145 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 17:00:34.503346    4145 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 17:00:34.503368    4145 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 17:00:34.503370    4145 kubeadm.go:310] 
	I0815 17:00:34.503396    4145 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 17:00:34.503399    4145 kubeadm.go:310] 
	I0815 17:00:34.503421    4145 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 17:00:34.503424    4145 kubeadm.go:310] 
	I0815 17:00:34.503451    4145 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 17:00:34.503486    4145 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 17:00:34.503521    4145 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 17:00:34.503526    4145 kubeadm.go:310] 
	I0815 17:00:34.503566    4145 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 17:00:34.503633    4145 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 17:00:34.503638    4145 kubeadm.go:310] 
	I0815 17:00:34.503683    4145 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2x6pd0.lf3zfx9c874ubs97 \
	I0815 17:00:34.503730    4145 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88912a497139cdcb80d3af465e15c830e797440a4ec3ed41d3c948a9662aad9e \
	I0815 17:00:34.503740    4145 kubeadm.go:310] 	--control-plane 
	I0815 17:00:34.503742    4145 kubeadm.go:310] 
	I0815 17:00:34.503785    4145 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 17:00:34.503790    4145 kubeadm.go:310] 
	I0815 17:00:34.503839    4145 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2x6pd0.lf3zfx9c874ubs97 \
	I0815 17:00:34.503888    4145 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88912a497139cdcb80d3af465e15c830e797440a4ec3ed41d3c948a9662aad9e 
	I0815 17:00:34.504021    4145 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 17:00:34.504033    4145 cni.go:84] Creating CNI manager for ""
	I0815 17:00:34.504043    4145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 17:00:34.507174    4145 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 17:00:34.514050    4145 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 17:00:34.517053    4145 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
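The scp above places a 496-byte bridge conflist into /etc/cni/net.d, which is what "Configuring bridge CNI" amounts to. A generic bridge-plugin conflist of the same shape, written the same way — standard CNI bridge/portmap fields with an illustrative host-local subnet, not necessarily minikube's exact contents:

```go
// Writes a generic bridge CNI config like the one the log transfers.
// The JSON fields are the standard bridge/portmap plugin schema; the
// subnet is an assumption for illustration.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "forceAddress": false,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// 0644 so the kubelet and CNI plugins can read it; the log's
	// `sudo mkdir -p /etc/cni/net.d` has already created the directory.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```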
	I0815 17:00:34.522009    4145 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 17:00:34.522056    4145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:00:34.522056    4145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-889000 minikube.k8s.io/updated_at=2024_08_15T17_00_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=stopped-upgrade-889000 minikube.k8s.io/primary=true
	I0815 17:00:34.525240    4145 ops.go:34] apiserver oom_adj: -16
	I0815 17:00:34.552882    4145 kubeadm.go:1113] duration metric: took 30.866584ms to wait for elevateKubeSystemPrivileges
	I0815 17:00:34.566441    4145 kubeadm.go:394] duration metric: took 4m11.568617208s to StartCluster
	I0815 17:00:34.566462    4145 settings.go:142] acquiring lock: {Name:mk3ef55eecb064d007fbd1b55ea891b5b51acd22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:00:34.566545    4145 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:00:34.567008    4145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/kubeconfig: {Name:mk7594709ce290a3e032dc58c8ec366ac5a2a141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:00:34.567204    4145 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:00:34.567299    4145 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 17:00:34.567283    4145 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 17:00:34.567319    4145 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-889000"
	I0815 17:00:34.567334    4145 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-889000"
	I0815 17:00:34.567336    4145 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-889000"
	W0815 17:00:34.567338    4145 addons.go:243] addon storage-provisioner should already be in state true
	I0815 17:00:34.567346    4145 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-889000"
	I0815 17:00:34.567351    4145 host.go:66] Checking if "stopped-upgrade-889000" exists ...
	I0815 17:00:34.568230    4145 kapi.go:59] client config for stopped-upgrade-889000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/profiles/stopped-upgrade-889000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-964/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066e9610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 17:00:34.568348    4145 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-889000"
	W0815 17:00:34.568353    4145 addons.go:243] addon default-storageclass should already be in state true
	I0815 17:00:34.568359    4145 host.go:66] Checking if "stopped-upgrade-889000" exists ...
	I0815 17:00:34.570990    4145 out.go:177] * Verifying Kubernetes components...
	I0815 17:00:34.571325    4145 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 17:00:34.575100    4145 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 17:00:34.575109    4145 sshutil.go:53] new ssh client: &{IP:localhost Port:50447 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/id_rsa Username:docker}
	I0815 17:00:34.578951    4145 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 17:00:34.583003    4145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:00:34.587074    4145 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:00:34.587080    4145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 17:00:34.587087    4145 sshutil.go:53] new ssh client: &{IP:localhost Port:50447 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/stopped-upgrade-889000/id_rsa Username:docker}
	I0815 17:00:34.660638    4145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:00:34.667436    4145 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:00:34.667493    4145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:00:34.673329    4145 api_server.go:72] duration metric: took 106.113916ms to wait for apiserver process to appear ...
	I0815 17:00:34.673337    4145 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:00:34.673343    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:34.675851    4145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 17:00:34.707908    4145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:00:35.020799    4145 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 17:00:35.020812    4145 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 17:00:39.675511    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:39.675559    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:44.675956    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:44.675996    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:49.676441    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:49.676501    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:54.677051    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:54.677105    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:00:59.678125    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:00:59.678176    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:01:04.679265    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:01:04.679326    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0815 17:01:05.023521    4145 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0815 17:01:05.027844    4145 out.go:177] * Enabled addons: storage-provisioner
	I0815 17:01:05.035761    4145 addons.go:510] duration metric: took 30.468139834s for enable addons: enabled=[storage-provisioner]
	I0815 17:01:09.680575    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:01:09.680661    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:01:14.682798    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:01:14.682839    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:01:19.684988    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:01:19.685047    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:01:24.687336    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:01:24.687373    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:01:29.689795    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:01:29.689877    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:01:34.692619    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:01:34.693013    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:01:34.745560    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:01:34.745692    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:01:34.775358    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:01:34.775435    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:01:34.788139    4145 logs.go:276] 2 containers: [abaa8a2d3441 6d4557baed9c]
	I0815 17:01:34.788213    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:01:34.798596    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:01:34.798663    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:01:34.809407    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:01:34.809482    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:01:34.819615    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:01:34.819679    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:01:34.829679    4145 logs.go:276] 0 containers: []
	W0815 17:01:34.829689    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:01:34.829741    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:01:34.840197    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:01:34.840214    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:01:34.840220    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:01:34.852121    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:01:34.852135    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:01:34.871574    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:01:34.871587    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:01:34.883037    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:01:34.883050    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:01:34.907628    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:01:34.907638    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:01:34.918621    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:01:34.918632    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:01:34.923088    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:01:34.923099    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:01:34.957176    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:01:34.957189    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:01:34.971134    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:01:34.971148    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:01:34.983242    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:01:34.983253    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:01:34.998187    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:01:34.998197    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:01:35.034833    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:01:35.034842    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:01:35.051557    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:01:35.051571    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:01:37.565028    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:01:42.567978    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:01:42.568449    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:01:42.603889    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:01:42.604014    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:01:42.627739    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:01:42.627858    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:01:42.648955    4145 logs.go:276] 2 containers: [abaa8a2d3441 6d4557baed9c]
	I0815 17:01:42.649027    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:01:42.660323    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:01:42.660391    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:01:42.671392    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:01:42.671463    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:01:42.681559    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:01:42.681626    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:01:42.694061    4145 logs.go:276] 0 containers: []
	W0815 17:01:42.694072    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:01:42.694126    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:01:42.704562    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:01:42.704580    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:01:42.704586    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:01:42.717041    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:01:42.717055    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:01:42.737482    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:01:42.737494    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:01:42.751916    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:01:42.751925    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:01:42.763932    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:01:42.763945    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:01:42.801876    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:01:42.801885    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:01:42.836093    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:01:42.836104    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:01:42.850805    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:01:42.850816    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:01:42.864798    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:01:42.864807    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:01:42.878142    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:01:42.878152    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:01:42.903292    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:01:42.903305    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:01:42.915768    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:01:42.915781    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:01:42.921733    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:01:42.921744    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:01:45.442797    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:01:50.444594    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:01:50.444968    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:01:50.478712    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:01:50.478832    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:01:50.497871    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:01:50.497957    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:01:50.512760    4145 logs.go:276] 2 containers: [abaa8a2d3441 6d4557baed9c]
	I0815 17:01:50.512843    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:01:50.525207    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:01:50.525268    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:01:50.535461    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:01:50.535523    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:01:50.546307    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:01:50.546377    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:01:50.556598    4145 logs.go:276] 0 containers: []
	W0815 17:01:50.556609    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:01:50.556668    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:01:50.566981    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:01:50.566996    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:01:50.567002    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:01:50.578870    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:01:50.578883    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:01:50.593806    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:01:50.593817    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:01:50.617219    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:01:50.617225    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:01:50.621344    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:01:50.621352    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:01:50.655767    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:01:50.655781    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:01:50.669816    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:01:50.669828    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:01:50.683316    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:01:50.683329    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:01:50.700143    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:01:50.700153    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:01:50.711441    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:01:50.711452    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:01:50.750093    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:01:50.750104    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:01:50.761879    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:01:50.761888    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:01:50.776594    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:01:50.776608    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:01:53.290374    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:01:58.291909    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:01:58.292337    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:01:58.331686    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:01:58.331839    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:01:58.353616    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:01:58.353724    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:01:58.369383    4145 logs.go:276] 2 containers: [abaa8a2d3441 6d4557baed9c]
	I0815 17:01:58.369467    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:01:58.383319    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:01:58.383399    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:01:58.394155    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:01:58.394229    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:01:58.404334    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:01:58.404402    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:01:58.414215    4145 logs.go:276] 0 containers: []
	W0815 17:01:58.414227    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:01:58.414287    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:01:58.424675    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:01:58.424690    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:01:58.424696    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:01:58.462422    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:01:58.462431    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:01:58.477900    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:01:58.477909    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:01:58.491435    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:01:58.491446    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:01:58.503357    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:01:58.503371    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:01:58.518080    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:01:58.518094    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:01:58.529467    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:01:58.529479    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:01:58.551799    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:01:58.551810    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:01:58.576320    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:01:58.576328    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:01:58.587125    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:01:58.587136    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:01:58.591753    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:01:58.591761    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:01:58.625039    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:01:58.625053    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:01:58.637169    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:01:58.637182    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:02:01.149008    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:02:06.151816    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:02:06.152279    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:02:06.196240    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:02:06.196380    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:02:06.217435    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:02:06.217524    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:02:06.235498    4145 logs.go:276] 2 containers: [abaa8a2d3441 6d4557baed9c]
	I0815 17:02:06.235569    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:02:06.247409    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:02:06.247482    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:02:06.257969    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:02:06.258039    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:02:06.268710    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:02:06.268771    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:02:06.279750    4145 logs.go:276] 0 containers: []
	W0815 17:02:06.279761    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:02:06.279813    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:02:06.290436    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:02:06.290451    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:02:06.290459    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:02:06.324814    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:02:06.324827    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:02:06.338538    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:02:06.338551    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:02:06.350036    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:02:06.350050    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:02:06.361552    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:02:06.361562    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:02:06.378659    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:02:06.378671    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:02:06.390378    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:02:06.390390    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:02:06.415493    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:02:06.415501    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:02:06.453681    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:02:06.453688    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:02:06.457747    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:02:06.457755    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:02:06.471450    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:02:06.471464    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:02:06.483605    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:02:06.483619    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:02:06.497706    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:02:06.497718    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:02:09.011569    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:02:14.014391    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:02:14.014943    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:02:14.058177    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:02:14.058326    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:02:14.078675    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:02:14.078791    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:02:14.098839    4145 logs.go:276] 2 containers: [abaa8a2d3441 6d4557baed9c]
	I0815 17:02:14.098916    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:02:14.110130    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:02:14.110199    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:02:14.121548    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:02:14.121620    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:02:14.131961    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:02:14.132023    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:02:14.142359    4145 logs.go:276] 0 containers: []
	W0815 17:02:14.142372    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:02:14.142425    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:02:14.157101    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:02:14.157114    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:02:14.157119    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:02:14.161337    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:02:14.161342    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:02:14.178639    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:02:14.178653    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:02:14.193914    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:02:14.193927    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:02:14.209979    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:02:14.209993    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:02:14.227249    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:02:14.227260    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:02:14.239250    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:02:14.239263    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:02:14.263965    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:02:14.263975    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:02:14.302389    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:02:14.302401    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:02:14.337476    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:02:14.337490    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:02:14.352303    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:02:14.352317    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:02:14.366164    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:02:14.366178    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:02:14.378483    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:02:14.378497    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:02:16.891790    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:02:21.894336    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:02:21.894663    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:02:21.927314    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:02:21.927451    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:02:21.952713    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:02:21.952791    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:02:21.964648    4145 logs.go:276] 2 containers: [abaa8a2d3441 6d4557baed9c]
	I0815 17:02:21.964736    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:02:21.975365    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:02:21.975434    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:02:21.985788    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:02:21.985857    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:02:21.996051    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:02:21.996122    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:02:22.005817    4145 logs.go:276] 0 containers: []
	W0815 17:02:22.005828    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:02:22.005885    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:02:22.015852    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:02:22.015865    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:02:22.015878    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:02:22.027331    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:02:22.027344    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:02:22.038625    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:02:22.038638    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:02:22.078315    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:02:22.078326    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:02:22.083076    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:02:22.083081    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:02:22.096937    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:02:22.096949    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:02:22.111448    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:02:22.111457    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:02:22.123183    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:02:22.123195    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:02:22.140721    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:02:22.140732    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:02:22.177035    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:02:22.177048    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:02:22.192687    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:02:22.192699    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:02:22.204683    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:02:22.204694    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:02:22.215709    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:02:22.215718    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
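(Editorial note: each failed probe triggers the same diagnostic sweep, visible above as pairs of "docker ps -a --filter" and "docker logs --tail 400" commands: minikube enumerates control-plane containers by the k8s_ name prefix and tails each one. A minimal sketch of that sweep, assuming Docker is the runtime as in this run; the component list, prefix, and --tail 400 come from the logged commands, while the gatherLogs helper name is hypothetical.)

	// Minimal sketch of the log-gathering sweep (assumed; not minikube source).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func gatherLogs(component string) error {
		// List all container IDs whose name carries the k8s_<component> prefix.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return err
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// mirrors the warning logged for "kindnet" in every cycle above
			fmt.Printf("No container was found matching %q\n", component)
			return nil
		}
		for _, id := range ids {
			// Tail the last 400 lines, exactly as the logged commands do.
			logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				return err
			}
			fmt.Printf("==> %s [%s]\n%s", component, id, logs)
		}
		return nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
			if err := gatherLogs(c); err != nil {
				fmt.Println("gather failed:", err)
			}
		}
	}

(Note that partway through the transcript the coredns match grows from 2 containers to 4, while kindnet consistently matches none; the sweep simply reports whatever the name filter finds on each pass.)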
	I0815 17:02:24.741042    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:02:29.743359    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:02:29.743857    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:02:29.781763    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:02:29.781910    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:02:29.803106    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:02:29.803196    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:02:29.817934    4145 logs.go:276] 2 containers: [abaa8a2d3441 6d4557baed9c]
	I0815 17:02:29.818011    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:02:29.830059    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:02:29.830128    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:02:29.841231    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:02:29.841302    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:02:29.851811    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:02:29.851872    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:02:29.863956    4145 logs.go:276] 0 containers: []
	W0815 17:02:29.863969    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:02:29.864026    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:02:29.874360    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:02:29.874375    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:02:29.874381    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:02:29.888487    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:02:29.888501    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:02:29.902181    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:02:29.902193    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:02:29.913610    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:02:29.913623    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:02:29.924726    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:02:29.924736    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:02:29.949408    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:02:29.949418    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:02:29.953583    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:02:29.953591    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:02:29.987606    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:02:29.987621    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:02:29.999551    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:02:29.999566    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:02:30.020467    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:02:30.020481    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:02:30.037624    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:02:30.037633    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:02:30.049358    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:02:30.049369    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:02:30.061038    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:02:30.061050    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:02:32.598915    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:02:37.601454    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:02:37.601748    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:02:37.630585    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:02:37.630708    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:02:37.649686    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:02:37.649765    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:02:37.663255    4145 logs.go:276] 2 containers: [abaa8a2d3441 6d4557baed9c]
	I0815 17:02:37.663320    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:02:37.674817    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:02:37.674879    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:02:37.684855    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:02:37.684923    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:02:37.695283    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:02:37.695339    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:02:37.705780    4145 logs.go:276] 0 containers: []
	W0815 17:02:37.705790    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:02:37.705842    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:02:37.716111    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:02:37.716128    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:02:37.716133    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:02:37.727243    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:02:37.727256    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:02:37.738481    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:02:37.738494    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:02:37.749803    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:02:37.749814    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:02:37.754619    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:02:37.754628    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:02:37.789972    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:02:37.789985    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:02:37.810958    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:02:37.810968    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:02:37.824815    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:02:37.824828    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:02:37.849508    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:02:37.849516    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:02:37.861623    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:02:37.861637    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:02:37.900628    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:02:37.900635    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:02:37.915507    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:02:37.915519    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:02:37.927421    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:02:37.927436    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:02:40.445533    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:02:45.446646    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:02:45.446708    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:02:45.458360    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:02:45.458424    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:02:45.469867    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:02:45.469932    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:02:45.486673    4145 logs.go:276] 2 containers: [abaa8a2d3441 6d4557baed9c]
	I0815 17:02:45.486723    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:02:45.497256    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:02:45.497317    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:02:45.509003    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:02:45.509059    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:02:45.521393    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:02:45.521449    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:02:45.536569    4145 logs.go:276] 0 containers: []
	W0815 17:02:45.536582    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:02:45.536626    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:02:45.547910    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:02:45.547928    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:02:45.547933    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:02:45.567216    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:02:45.567227    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:02:45.579118    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:02:45.579130    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:02:45.597128    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:02:45.597139    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:02:45.613036    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:02:45.613045    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:02:45.652486    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:02:45.652496    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:02:45.667411    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:02:45.667425    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:02:45.682907    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:02:45.682919    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:02:45.696011    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:02:45.696019    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:02:45.707753    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:02:45.707764    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:02:45.731779    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:02:45.731798    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:02:45.769638    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:02:45.769656    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:02:45.774969    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:02:45.774979    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:02:48.289807    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:02:53.292726    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:02:53.293155    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:02:53.331558    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:02:53.331692    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:02:53.352689    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:02:53.352801    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:02:53.372440    4145 logs.go:276] 4 containers: [f8060fe3fe5c 332167daa567 abaa8a2d3441 6d4557baed9c]
	I0815 17:02:53.372523    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:02:53.384154    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:02:53.384227    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:02:53.395818    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:02:53.395892    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:02:53.406135    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:02:53.406201    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:02:53.416150    4145 logs.go:276] 0 containers: []
	W0815 17:02:53.416161    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:02:53.416219    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:02:53.426348    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:02:53.426367    4145 logs.go:123] Gathering logs for coredns [f8060fe3fe5c] ...
	I0815 17:02:53.426372    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8060fe3fe5c"
	I0815 17:02:53.437342    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:02:53.437352    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:02:53.449976    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:02:53.449989    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:02:53.486651    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:02:53.486662    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:02:53.491084    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:02:53.491089    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:02:53.504789    4145 logs.go:123] Gathering logs for coredns [332167daa567] ...
	I0815 17:02:53.504801    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332167daa567"
	I0815 17:02:53.515949    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:02:53.515964    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:02:53.539587    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:02:53.539600    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:02:53.551875    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:02:53.551885    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:02:53.566280    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:02:53.566291    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:02:53.581102    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:02:53.581116    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:02:53.593563    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:02:53.593581    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:02:53.618385    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:02:53.618402    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:02:53.658438    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:02:53.658451    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:02:53.671690    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:02:53.671701    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:02:56.186729    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:03:01.181433    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:03:01.181871    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:03:01.221368    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:03:01.221502    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:03:01.243333    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:03:01.243446    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:03:01.259581    4145 logs.go:276] 4 containers: [f8060fe3fe5c 332167daa567 abaa8a2d3441 6d4557baed9c]
	I0815 17:03:01.259667    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:03:01.272481    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:03:01.272551    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:03:01.283685    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:03:01.283755    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:03:01.294247    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:03:01.294314    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:03:01.304402    4145 logs.go:276] 0 containers: []
	W0815 17:03:01.304412    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:03:01.304468    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:03:01.316786    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:03:01.316803    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:03:01.316811    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:03:01.342446    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:03:01.342456    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:03:01.354141    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:03:01.354152    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:03:01.366456    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:03:01.366466    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:03:01.377873    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:03:01.377886    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:03:01.391888    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:03:01.391898    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:03:01.406127    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:03:01.406139    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:03:01.419564    4145 logs.go:123] Gathering logs for coredns [f8060fe3fe5c] ...
	I0815 17:03:01.419577    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8060fe3fe5c"
	I0815 17:03:01.441593    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:03:01.441606    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:03:01.453510    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:03:01.453524    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:03:01.470826    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:03:01.470836    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:03:01.474953    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:03:01.474962    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:03:01.509910    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:03:01.509926    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:03:01.524433    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:03:01.524443    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:03:01.561553    4145 logs.go:123] Gathering logs for coredns [332167daa567] ...
	I0815 17:03:01.561560    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332167daa567"
	I0815 17:03:04.073046    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:03:09.071168    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:03:09.071242    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:03:09.083397    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:03:09.083458    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:03:09.095318    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:03:09.095365    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:03:09.106302    4145 logs.go:276] 4 containers: [f8060fe3fe5c 332167daa567 abaa8a2d3441 6d4557baed9c]
	I0815 17:03:09.106362    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:03:09.121102    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:03:09.121187    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:03:09.131816    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:03:09.131879    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:03:09.143642    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:03:09.143716    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:03:09.154446    4145 logs.go:276] 0 containers: []
	W0815 17:03:09.154458    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:03:09.154515    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:03:09.165960    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:03:09.165982    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:03:09.165988    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:03:09.202777    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:03:09.202797    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:03:09.219747    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:03:09.219765    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:03:09.234846    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:03:09.234856    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:03:09.253351    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:03:09.253361    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:03:09.292285    4145 logs.go:123] Gathering logs for coredns [332167daa567] ...
	I0815 17:03:09.292302    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332167daa567"
	I0815 17:03:09.311516    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:03:09.311526    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:03:09.325497    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:03:09.325510    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:03:09.347913    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:03:09.347924    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:03:09.375274    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:03:09.375294    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:03:09.388617    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:03:09.388633    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:03:09.403908    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:03:09.403919    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:03:09.415821    4145 logs.go:123] Gathering logs for coredns [f8060fe3fe5c] ...
	I0815 17:03:09.415833    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8060fe3fe5c"
	I0815 17:03:09.428788    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:03:09.428799    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:03:09.441752    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:03:09.441762    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:03:11.946981    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:03:16.946817    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:03:16.947264    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:03:16.987057    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:03:16.987178    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:03:17.006648    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:03:17.006754    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:03:17.020803    4145 logs.go:276] 4 containers: [f8060fe3fe5c 332167daa567 abaa8a2d3441 6d4557baed9c]
	I0815 17:03:17.020879    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:03:17.032811    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:03:17.032886    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:03:17.043597    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:03:17.043670    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:03:17.055635    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:03:17.055696    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:03:17.065334    4145 logs.go:276] 0 containers: []
	W0815 17:03:17.065347    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:03:17.065406    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:03:17.075936    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:03:17.075954    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:03:17.075959    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:03:17.109234    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:03:17.109244    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:03:17.121033    4145 logs.go:123] Gathering logs for coredns [f8060fe3fe5c] ...
	I0815 17:03:17.121046    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8060fe3fe5c"
	I0815 17:03:17.132178    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:03:17.132190    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:03:17.146652    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:03:17.146666    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:03:17.159572    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:03:17.159585    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:03:17.164039    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:03:17.164045    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:03:17.185805    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:03:17.185818    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:03:17.197448    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:03:17.197461    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:03:17.209132    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:03:17.209142    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:03:17.233100    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:03:17.233107    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:03:17.269001    4145 logs.go:123] Gathering logs for coredns [332167daa567] ...
	I0815 17:03:17.269009    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332167daa567"
	I0815 17:03:17.280697    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:03:17.280711    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:03:17.297968    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:03:17.297980    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:03:17.312008    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:03:17.312019    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:03:19.824786    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:03:24.825963    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:03:24.826107    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:03:24.859274    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:03:24.859388    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:03:24.872871    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:03:24.872943    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:03:24.890122    4145 logs.go:276] 4 containers: [f8060fe3fe5c 332167daa567 abaa8a2d3441 6d4557baed9c]
	I0815 17:03:24.890186    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:03:24.904523    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:03:24.904591    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:03:24.920616    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:03:24.920692    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:03:24.930746    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:03:24.930812    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:03:24.941413    4145 logs.go:276] 0 containers: []
	W0815 17:03:24.941424    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:03:24.941481    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:03:24.951371    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:03:24.951388    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:03:24.951393    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:03:24.966064    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:03:24.966077    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:03:24.980622    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:03:24.980634    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:03:24.985318    4145 logs.go:123] Gathering logs for coredns [f8060fe3fe5c] ...
	I0815 17:03:24.985327    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8060fe3fe5c"
	I0815 17:03:24.996723    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:03:24.996734    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:03:25.008332    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:03:25.008346    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:03:25.019715    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:03:25.019726    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:03:25.042937    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:03:25.042949    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:03:25.059650    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:03:25.059660    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:03:25.096197    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:03:25.096206    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:03:25.107379    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:03:25.107394    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:03:25.121108    4145 logs.go:123] Gathering logs for coredns [332167daa567] ...
	I0815 17:03:25.121121    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332167daa567"
	I0815 17:03:25.138187    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:03:25.138199    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:03:25.149060    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:03:25.149074    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:03:25.184682    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:03:25.184697    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:03:27.700531    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:03:32.701604    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:03:32.701671    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:03:32.712132    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:03:32.712189    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:03:32.723765    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:03:32.723823    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:03:32.735008    4145 logs.go:276] 4 containers: [f8060fe3fe5c 332167daa567 abaa8a2d3441 6d4557baed9c]
	I0815 17:03:32.735079    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:03:32.746902    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:03:32.746949    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:03:32.759798    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:03:32.759864    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:03:32.773521    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:03:32.773578    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:03:32.797240    4145 logs.go:276] 0 containers: []
	W0815 17:03:32.797251    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:03:32.797305    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:03:32.813287    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:03:32.813306    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:03:32.813311    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:03:32.837880    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:03:32.837894    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:03:32.864509    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:03:32.864523    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:03:32.880336    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:03:32.880349    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:03:32.895669    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:03:32.895681    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:03:32.908163    4145 logs.go:123] Gathering logs for coredns [f8060fe3fe5c] ...
	I0815 17:03:32.908179    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8060fe3fe5c"
	I0815 17:03:32.920727    4145 logs.go:123] Gathering logs for coredns [332167daa567] ...
	I0815 17:03:32.920741    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332167daa567"
	I0815 17:03:32.933880    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:03:32.933892    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:03:32.946813    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:03:32.946825    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:03:32.984683    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:03:32.984696    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:03:33.000673    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:03:33.000685    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:03:33.013450    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:03:33.013461    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:03:33.027995    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:03:33.028008    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:03:33.039731    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:03:33.039744    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:03:33.078331    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:03:33.078351    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:03:35.585375    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:03:40.586013    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:03:40.586447    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:03:40.626225    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:03:40.626363    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:03:40.648846    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:03:40.648948    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:03:40.664378    4145 logs.go:276] 4 containers: [f8060fe3fe5c 332167daa567 abaa8a2d3441 6d4557baed9c]
	I0815 17:03:40.664444    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:03:40.676997    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:03:40.677068    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:03:40.687597    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:03:40.687664    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:03:40.698237    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:03:40.698295    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:03:40.708277    4145 logs.go:276] 0 containers: []
	W0815 17:03:40.708288    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:03:40.708334    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:03:40.723320    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:03:40.723339    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:03:40.723344    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:03:40.727469    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:03:40.727474    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:03:40.738938    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:03:40.738950    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:03:40.775540    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:03:40.775549    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:03:40.790532    4145 logs.go:123] Gathering logs for coredns [f8060fe3fe5c] ...
	I0815 17:03:40.790544    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8060fe3fe5c"
	I0815 17:03:40.802531    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:03:40.802542    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:03:40.817218    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:03:40.817230    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:03:40.835605    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:03:40.835617    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:03:40.847429    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:03:40.847443    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:03:40.862446    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:03:40.862458    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:03:40.874615    4145 logs.go:123] Gathering logs for coredns [332167daa567] ...
	I0815 17:03:40.874625    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332167daa567"
	I0815 17:03:40.886280    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:03:40.886293    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:03:40.898403    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:03:40.898417    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:03:40.910791    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:03:40.910800    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:03:40.940916    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:03:40.940928    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:03:43.477800    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:03:48.480215    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:03:48.480466    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:03:48.506197    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:03:48.506307    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:03:48.523971    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:03:48.524055    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:03:48.538055    4145 logs.go:276] 4 containers: [f8060fe3fe5c 332167daa567 abaa8a2d3441 6d4557baed9c]
	I0815 17:03:48.538128    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:03:48.549548    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:03:48.549618    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:03:48.559739    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:03:48.559797    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:03:48.569935    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:03:48.569989    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:03:48.579870    4145 logs.go:276] 0 containers: []
	W0815 17:03:48.579878    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:03:48.579924    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:03:48.589852    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:03:48.589870    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:03:48.589875    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:03:48.624156    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:03:48.624170    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:03:48.636254    4145 logs.go:123] Gathering logs for coredns [f8060fe3fe5c] ...
	I0815 17:03:48.636266    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8060fe3fe5c"
	I0815 17:03:48.647524    4145 logs.go:123] Gathering logs for coredns [332167daa567] ...
	I0815 17:03:48.647534    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332167daa567"
	I0815 17:03:48.661492    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:03:48.661505    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:03:48.676388    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:03:48.676398    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:03:48.681087    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:03:48.681095    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:03:48.692606    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:03:48.692619    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:03:48.704133    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:03:48.704141    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:03:48.740384    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:03:48.740393    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:03:48.754968    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:03:48.754981    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:03:48.770593    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:03:48.770605    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:03:48.781994    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:03:48.782008    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:03:48.799588    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:03:48.799598    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:03:48.812912    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:03:48.812924    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:03:51.340628    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:03:56.341957    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:03:56.342040    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:03:56.357457    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:03:56.357547    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:03:56.368762    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:03:56.368822    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:03:56.380601    4145 logs.go:276] 4 containers: [f8060fe3fe5c 332167daa567 abaa8a2d3441 6d4557baed9c]
	I0815 17:03:56.380681    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:03:56.392170    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:03:56.392247    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:03:56.407974    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:03:56.408035    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:03:56.419318    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:03:56.419388    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:03:56.430971    4145 logs.go:276] 0 containers: []
	W0815 17:03:56.430984    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:03:56.431031    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:03:56.444346    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:03:56.444369    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:03:56.444375    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:03:56.456925    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:03:56.456937    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:03:56.475989    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:03:56.476002    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:03:56.489913    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:03:56.489927    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:03:56.505336    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:03:56.505348    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:03:56.520232    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:03:56.520245    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:03:56.533283    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:03:56.533295    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:03:56.572993    4145 logs.go:123] Gathering logs for coredns [f8060fe3fe5c] ...
	I0815 17:03:56.573010    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8060fe3fe5c"
	I0815 17:03:56.586009    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:03:56.586022    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:03:56.598924    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:03:56.598934    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:03:56.603216    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:03:56.603223    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:03:56.641011    4145 logs.go:123] Gathering logs for coredns [332167daa567] ...
	I0815 17:03:56.641020    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332167daa567"
	I0815 17:03:56.653545    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:03:56.653555    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:03:56.668702    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:03:56.668715    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:03:56.680842    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:03:56.680854    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:03:59.209085    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:04:04.211330    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:04:04.211803    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:04:04.251298    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:04:04.251435    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:04:04.273806    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:04:04.273913    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:04:04.289210    4145 logs.go:276] 4 containers: [f8060fe3fe5c 332167daa567 abaa8a2d3441 6d4557baed9c]
	I0815 17:04:04.289294    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:04:04.301386    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:04:04.301464    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:04:04.312083    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:04:04.312145    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:04:04.322836    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:04:04.322911    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:04:04.334411    4145 logs.go:276] 0 containers: []
	W0815 17:04:04.334422    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:04:04.334475    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:04:04.350142    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:04:04.350161    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:04:04.350166    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:04:04.364245    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:04:04.364257    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:04:04.382219    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:04:04.382232    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:04:04.420149    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:04:04.420158    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:04:04.455497    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:04:04.455511    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:04:04.469230    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:04:04.469244    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:04:04.486732    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:04:04.486744    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:04:04.500624    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:04:04.500638    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:04:04.511944    4145 logs.go:123] Gathering logs for coredns [f8060fe3fe5c] ...
	I0815 17:04:04.511953    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8060fe3fe5c"
	I0815 17:04:04.522857    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:04:04.522871    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:04:04.535142    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:04:04.535154    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:04:04.546644    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:04:04.546657    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:04:04.569789    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:04:04.569796    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:04:04.573854    4145 logs.go:123] Gathering logs for coredns [332167daa567] ...
	I0815 17:04:04.573861    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332167daa567"
	I0815 17:04:04.585446    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:04:04.585460    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:04:07.105298    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:04:12.107904    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:04:12.108093    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:04:12.123921    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:04:12.123995    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:04:12.136665    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:04:12.136734    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:04:12.155562    4145 logs.go:276] 4 containers: [f8060fe3fe5c 332167daa567 abaa8a2d3441 6d4557baed9c]
	I0815 17:04:12.155634    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:04:12.166169    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:04:12.166240    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:04:12.176339    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:04:12.176395    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:04:12.190496    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:04:12.190560    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:04:12.216600    4145 logs.go:276] 0 containers: []
	W0815 17:04:12.216611    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:04:12.216662    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:04:12.228567    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:04:12.228585    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:04:12.228591    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:04:12.245522    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:04:12.245534    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:04:12.261277    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:04:12.261289    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:04:12.272775    4145 logs.go:123] Gathering logs for coredns [332167daa567] ...
	I0815 17:04:12.272789    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332167daa567"
	I0815 17:04:12.289062    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:04:12.289076    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:04:12.303398    4145 logs.go:123] Gathering logs for coredns [f8060fe3fe5c] ...
	I0815 17:04:12.303410    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8060fe3fe5c"
	I0815 17:04:12.315147    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:04:12.315157    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:04:12.328904    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:04:12.328917    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:04:12.343298    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:04:12.343312    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:04:12.356383    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:04:12.356394    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:04:12.379411    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:04:12.379420    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:04:12.415479    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:04:12.415488    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:04:12.427492    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:04:12.427503    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:04:12.439116    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:04:12.439130    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:04:12.443833    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:04:12.443843    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:04:14.981110    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:04:19.982572    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:04:19.983037    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:04:20.023890    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:04:20.024018    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:04:20.045462    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:04:20.045587    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:04:20.060362    4145 logs.go:276] 4 containers: [f8060fe3fe5c 332167daa567 abaa8a2d3441 6d4557baed9c]
	I0815 17:04:20.060449    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:04:20.072782    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:04:20.072852    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:04:20.083493    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:04:20.083557    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:04:20.094270    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:04:20.094335    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:04:20.104938    4145 logs.go:276] 0 containers: []
	W0815 17:04:20.104949    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:04:20.105002    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:04:20.115564    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:04:20.115579    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:04:20.115587    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:04:20.127105    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:04:20.127119    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:04:20.144293    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:04:20.144305    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:04:20.155500    4145 logs.go:123] Gathering logs for coredns [332167daa567] ...
	I0815 17:04:20.155513    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332167daa567"
	I0815 17:04:20.166838    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:04:20.166851    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:04:20.178334    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:04:20.178348    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:04:20.199308    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:04:20.199318    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:04:20.224413    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:04:20.224422    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:04:20.262137    4145 logs.go:123] Gathering logs for coredns [f8060fe3fe5c] ...
	I0815 17:04:20.262145    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8060fe3fe5c"
	I0815 17:04:20.274169    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:04:20.274183    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:04:20.285636    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:04:20.285648    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:04:20.321182    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:04:20.321197    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:04:20.335540    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:04:20.335551    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:04:20.349992    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:04:20.350006    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:04:20.354607    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:04:20.354617    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:04:22.870458    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:04:27.872695    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:04:27.872963    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 17:04:27.897047    4145 logs.go:276] 1 containers: [ed44582fc466]
	I0815 17:04:27.897157    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 17:04:27.916905    4145 logs.go:276] 1 containers: [e97f427c3fe9]
	I0815 17:04:27.916966    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 17:04:27.929437    4145 logs.go:276] 4 containers: [f8060fe3fe5c 332167daa567 abaa8a2d3441 6d4557baed9c]
	I0815 17:04:27.929511    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 17:04:27.940194    4145 logs.go:276] 1 containers: [825efb79b2bd]
	I0815 17:04:27.940250    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 17:04:27.950124    4145 logs.go:276] 1 containers: [e40932e3c30b]
	I0815 17:04:27.950194    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 17:04:27.960823    4145 logs.go:276] 1 containers: [5e64b2ae5b70]
	I0815 17:04:27.960885    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 17:04:27.970531    4145 logs.go:276] 0 containers: []
	W0815 17:04:27.970543    4145 logs.go:278] No container was found matching "kindnet"
	I0815 17:04:27.970602    4145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 17:04:27.981047    4145 logs.go:276] 1 containers: [d7e3b121d03b]
	I0815 17:04:27.981065    4145 logs.go:123] Gathering logs for kubelet ...
	I0815 17:04:27.981071    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:04:28.017708    4145 logs.go:123] Gathering logs for kube-apiserver [ed44582fc466] ...
	I0815 17:04:28.017717    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed44582fc466"
	I0815 17:04:28.031528    4145 logs.go:123] Gathering logs for coredns [6d4557baed9c] ...
	I0815 17:04:28.031540    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d4557baed9c"
	I0815 17:04:28.043503    4145 logs.go:123] Gathering logs for dmesg ...
	I0815 17:04:28.043517    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:04:28.049998    4145 logs.go:123] Gathering logs for kube-scheduler [825efb79b2bd] ...
	I0815 17:04:28.050010    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825efb79b2bd"
	I0815 17:04:28.064952    4145 logs.go:123] Gathering logs for kube-controller-manager [5e64b2ae5b70] ...
	I0815 17:04:28.064963    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e64b2ae5b70"
	I0815 17:04:28.082393    4145 logs.go:123] Gathering logs for storage-provisioner [d7e3b121d03b] ...
	I0815 17:04:28.082403    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e3b121d03b"
	I0815 17:04:28.094171    4145 logs.go:123] Gathering logs for container status ...
	I0815 17:04:28.094185    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:04:28.105860    4145 logs.go:123] Gathering logs for etcd [e97f427c3fe9] ...
	I0815 17:04:28.105874    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e97f427c3fe9"
	I0815 17:04:28.119423    4145 logs.go:123] Gathering logs for coredns [abaa8a2d3441] ...
	I0815 17:04:28.119434    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abaa8a2d3441"
	I0815 17:04:28.131439    4145 logs.go:123] Gathering logs for kube-proxy [e40932e3c30b] ...
	I0815 17:04:28.131454    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40932e3c30b"
	I0815 17:04:28.143053    4145 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:04:28.143066    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:04:28.177390    4145 logs.go:123] Gathering logs for coredns [f8060fe3fe5c] ...
	I0815 17:04:28.177403    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8060fe3fe5c"
	I0815 17:04:28.188958    4145 logs.go:123] Gathering logs for coredns [332167daa567] ...
	I0815 17:04:28.188970    4145 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 332167daa567"
	I0815 17:04:28.200994    4145 logs.go:123] Gathering logs for Docker ...
	I0815 17:04:28.201006    4145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 17:04:30.724779    4145 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 17:04:35.727074    4145 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 17:04:35.733717    4145 out.go:201] 
	W0815 17:04:35.738798    4145 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0815 17:04:35.738832    4145 out.go:270] * 
	W0815 17:04:35.741317    4145 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:04:35.750779    4145 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-889000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (582.61s)
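The wait loop above probes the guest apiserver at https://10.0.2.15:8443/healthz with a 5s client timeout and, between probes, re-gathers the same container logs (docker ps -a --filter=name=k8s_* followed by docker logs --tail 400) until the 6m0s node wait expires. A minimal manual re-check of the same probe, assuming the guest is still reachable over SSH (the profile name is taken from this failure):

	# probe the health endpoint the way the wait loop does; -k skips TLS
	# verification and --max-time mirrors the 5s client timeout
	minikube ssh -p stopped-upgrade-889000 -- curl -sk --max-time 5 https://10.0.2.15:8443/healthz
	# list the kube-apiserver container the log gatherer inspects
	minikube ssh -p stopped-upgrade-889000 -- "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"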

TestPause/serial/Start (9.92s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-831000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-831000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.874633959s)

-- stdout --
	* [pause-831000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-831000" primary control-plane node in "pause-831000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-831000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-831000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-831000 -n pause-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-831000 -n pause-831000: exit status 7 (46.919917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.92s)
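This failure, and every start failure below it, aborts at the same point: the qemu2 driver cannot reach the socket_vmnet helper at /var/run/socket_vmnet ("Connection refused"), so the VM is never created and Kubernetes never starts. A hedged diagnostic sketch, assuming socket_vmnet was installed via Homebrew (the service name and socket path are the upstream defaults, not confirmed by this report):

	# is the helper's unix socket present on the host?
	ls -l /var/run/socket_vmnet
	# if installed with Homebrew, inspect and restart the daemon
	# (it runs as root because opening vmnet requires it)
	sudo brew services list | grep socket_vmnet
	sudo brew services restart socket_vmnet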

TestNoKubernetes/serial/StartWithK8s (9.78s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-255000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-255000 --driver=qemu2 : exit status 80 (9.743330583s)

-- stdout --
	* [NoKubernetes-255000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-255000" primary control-plane node in "NoKubernetes-255000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-255000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-255000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-255000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-255000 -n NoKubernetes-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-255000 -n NoKubernetes-255000: exit status 7 (33.001916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-255000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.78s)
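The post-mortem reads a single field of minikube status through a Go template, and the exit status 7 is the combined "nothing is running" code that the harness explicitly treats as acceptable ("may be ok"). Other fields can be pulled the same way; the Kubelet and APIServer field names below are assumed from minikube's JSON status schema rather than shown in this report:

	out/minikube-darwin-arm64 status -p NoKubernetes-255000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'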

TestNoKubernetes/serial/StartWithStopK8s (5.26s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-255000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-255000 --no-kubernetes --driver=qemu2 : exit status 80 (5.229672958s)

-- stdout --
	* [NoKubernetes-255000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-255000
	* Restarting existing qemu2 VM for "NoKubernetes-255000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-255000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-255000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-255000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-255000 -n NoKubernetes-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-255000 -n NoKubernetes-255000: exit status 7 (33.930166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-255000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.26s)

TestNoKubernetes/serial/Start (5.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-255000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-255000 --no-kubernetes --driver=qemu2 : exit status 80 (5.245905083s)

-- stdout --
	* [NoKubernetes-255000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-255000
	* Restarting existing qemu2 VM for "NoKubernetes-255000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-255000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-255000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-255000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-255000 -n NoKubernetes-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-255000 -n NoKubernetes-255000: exit status 7 (52.969667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-255000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (5.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-255000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-255000 --driver=qemu2 : exit status 80 (5.270698666s)

                                                
                                                
-- stdout --
	* [NoKubernetes-255000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-255000
	* Restarting existing qemu2 VM for "NoKubernetes-255000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-255000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-255000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-255000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-255000 -n NoKubernetes-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-255000 -n NoKubernetes-255000: exit status 7 (51.990834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-255000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)
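
Every failure in this run reduces to the same root cause visible in the stdout/stderr above: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client (and therefore QEMU) can never start. A minimal diagnostic sketch in Go, assuming only the socket path shown in these logs (this program is not part of the minikube test suite):

// socketcheck.go — reproduces the failing step in isolation: connecting
// to the socket_vmnet control socket. The path matches SocketVMnetPath
// in the cluster config dumps later in this report.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the ERROR lines in the logs:
		// the socket file may exist, but no daemon is accepting on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Run on the affected agent, this should print the same "connection refused" the tests report while the socket_vmnet daemon is down, and succeed once the daemon is restarted.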

TestNetworkPlugins/group/auto/Start (9.87s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.865313292s)

-- stdout --
	* [auto-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-972000" primary control-plane node in "auto-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:02:34.192033    4626 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:02:34.192156    4626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:02:34.192163    4626 out.go:358] Setting ErrFile to fd 2...
	I0815 17:02:34.192166    4626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:02:34.192282    4626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:02:34.193391    4626 out.go:352] Setting JSON to false
	I0815 17:02:34.210015    4626 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3722,"bootTime":1723762832,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:02:34.210084    4626 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:02:34.215194    4626 out.go:177] * [auto-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:02:34.223102    4626 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:02:34.223156    4626 notify.go:220] Checking for updates...
	I0815 17:02:34.228957    4626 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:02:34.232022    4626 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:02:34.233255    4626 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:02:34.236059    4626 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:02:34.243043    4626 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:02:34.246354    4626 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:02:34.246424    4626 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 17:02:34.246466    4626 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:02:34.249964    4626 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 17:02:34.257048    4626 start.go:297] selected driver: qemu2
	I0815 17:02:34.257055    4626 start.go:901] validating driver "qemu2" against <nil>
	I0815 17:02:34.257060    4626 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:02:34.259264    4626 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:02:34.261995    4626 out.go:177] * Automatically selected the socket_vmnet network
	I0815 17:02:34.265202    4626 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:02:34.265260    4626 cni.go:84] Creating CNI manager for ""
	I0815 17:02:34.265268    4626 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 17:02:34.265273    4626 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 17:02:34.265316    4626 start.go:340] cluster config:
	{Name:auto-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:02:34.268845    4626 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:02:34.275965    4626 out.go:177] * Starting "auto-972000" primary control-plane node in "auto-972000" cluster
	I0815 17:02:34.280018    4626 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:02:34.280032    4626 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:02:34.280043    4626 cache.go:56] Caching tarball of preloaded images
	I0815 17:02:34.280104    4626 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:02:34.280109    4626 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:02:34.280162    4626 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/auto-972000/config.json ...
	I0815 17:02:34.280172    4626 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/auto-972000/config.json: {Name:mkd5d7848aa645b5efe45fbd93e6e3aaa66daff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:02:34.280495    4626 start.go:360] acquireMachinesLock for auto-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:02:34.280526    4626 start.go:364] duration metric: took 25.875µs to acquireMachinesLock for "auto-972000"
	I0815 17:02:34.280538    4626 start.go:93] Provisioning new machine with config: &{Name:auto-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:auto-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:02:34.280564    4626 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:02:34.289025    4626 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:02:34.305610    4626 start.go:159] libmachine.API.Create for "auto-972000" (driver="qemu2")
	I0815 17:02:34.305652    4626 client.go:168] LocalClient.Create starting
	I0815 17:02:34.305723    4626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:02:34.305753    4626 main.go:141] libmachine: Decoding PEM data...
	I0815 17:02:34.305762    4626 main.go:141] libmachine: Parsing certificate...
	I0815 17:02:34.305805    4626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:02:34.305828    4626 main.go:141] libmachine: Decoding PEM data...
	I0815 17:02:34.305836    4626 main.go:141] libmachine: Parsing certificate...
	I0815 17:02:34.306207    4626 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:02:34.458961    4626 main.go:141] libmachine: Creating SSH key...
	I0815 17:02:34.529844    4626 main.go:141] libmachine: Creating Disk image...
	I0815 17:02:34.529850    4626 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:02:34.530050    4626 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/disk.qcow2
	I0815 17:02:34.539182    4626 main.go:141] libmachine: STDOUT: 
	I0815 17:02:34.539199    4626 main.go:141] libmachine: STDERR: 
	I0815 17:02:34.539247    4626 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/disk.qcow2 +20000M
	I0815 17:02:34.547902    4626 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:02:34.547920    4626 main.go:141] libmachine: STDERR: 
	I0815 17:02:34.547936    4626 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/disk.qcow2
	I0815 17:02:34.547942    4626 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:02:34.547953    4626 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:02:34.547981    4626 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:71:78:47:19:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/disk.qcow2
	I0815 17:02:34.549901    4626 main.go:141] libmachine: STDOUT: 
	I0815 17:02:34.549923    4626 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:02:34.549948    4626 client.go:171] duration metric: took 244.288084ms to LocalClient.Create
	I0815 17:02:36.552066    4626 start.go:128] duration metric: took 2.271466042s to createHost
	I0815 17:02:36.552109    4626 start.go:83] releasing machines lock for "auto-972000", held for 2.271552375s
	W0815 17:02:36.552138    4626 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:02:36.564126    4626 out.go:177] * Deleting "auto-972000" in qemu2 ...
	W0815 17:02:36.578144    4626 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:02:36.578154    4626 start.go:729] Will try again in 5 seconds ...
	I0815 17:02:41.580392    4626 start.go:360] acquireMachinesLock for auto-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:02:41.581018    4626 start.go:364] duration metric: took 541.875µs to acquireMachinesLock for "auto-972000"
	I0815 17:02:41.581091    4626 start.go:93] Provisioning new machine with config: &{Name:auto-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:auto-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:02:41.581333    4626 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:02:41.590822    4626 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:02:41.640593    4626 start.go:159] libmachine.API.Create for "auto-972000" (driver="qemu2")
	I0815 17:02:41.640652    4626 client.go:168] LocalClient.Create starting
	I0815 17:02:41.640775    4626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:02:41.640841    4626 main.go:141] libmachine: Decoding PEM data...
	I0815 17:02:41.640856    4626 main.go:141] libmachine: Parsing certificate...
	I0815 17:02:41.640932    4626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:02:41.640976    4626 main.go:141] libmachine: Decoding PEM data...
	I0815 17:02:41.640998    4626 main.go:141] libmachine: Parsing certificate...
	I0815 17:02:41.641516    4626 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:02:41.801933    4626 main.go:141] libmachine: Creating SSH key...
	I0815 17:02:41.968681    4626 main.go:141] libmachine: Creating Disk image...
	I0815 17:02:41.968694    4626 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:02:41.968967    4626 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/disk.qcow2
	I0815 17:02:41.978438    4626 main.go:141] libmachine: STDOUT: 
	I0815 17:02:41.978457    4626 main.go:141] libmachine: STDERR: 
	I0815 17:02:41.978502    4626 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/disk.qcow2 +20000M
	I0815 17:02:41.986508    4626 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:02:41.986527    4626 main.go:141] libmachine: STDERR: 
	I0815 17:02:41.986538    4626 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/disk.qcow2
	I0815 17:02:41.986543    4626 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:02:41.986552    4626 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:02:41.986590    4626 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:97:bf:72:14:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/auto-972000/disk.qcow2
	I0815 17:02:41.988236    4626 main.go:141] libmachine: STDOUT: 
	I0815 17:02:41.988261    4626 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:02:41.988273    4626 client.go:171] duration metric: took 347.612709ms to LocalClient.Create
	I0815 17:02:43.988520    4626 start.go:128] duration metric: took 2.407120708s to createHost
	I0815 17:02:43.988646    4626 start.go:83] releasing machines lock for "auto-972000", held for 2.407564s
	W0815 17:02:43.989048    4626 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:02:43.997641    4626 out.go:201] 
	W0815 17:02:44.004768    4626 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:02:44.004816    4626 out.go:270] * 
	* 
	W0815 17:02:44.007350    4626 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:02:44.015682    4626 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.87s)
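
The stderr trace above also shows minikube's recovery behaviour: createHost fails, the half-created profile is deleted, and after "Will try again in 5 seconds" exactly one more attempt is made before the command exits with status 80. A simplified sketch of that control flow, with hypothetical names (createHost below is a stand-in, not minikube's actual function):

// retry.go — sketch of the start-retry flow visible in the trace:
// createHost → failure → delete → 5s wait → one retry → give up.
package main

import (
	"errors"
	"fmt"
	"time"
)

func createHost() error {
	// Stand-in for the real libmachine create path; in this report it
	// always fails with "Connection refused" from socket_vmnet.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry() error {
	if err := createHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
		if err := createHost(); err != nil {
			return fmt.Errorf("error provisioning guest: %w", err)
		}
	}
	return nil
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}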

TestNetworkPlugins/group/kindnet/Start (9.71s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.708913459s)

-- stdout --
	* [kindnet-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-972000" primary control-plane node in "kindnet-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:02:46.200098    4735 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:02:46.200239    4735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:02:46.200242    4735 out.go:358] Setting ErrFile to fd 2...
	I0815 17:02:46.200245    4735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:02:46.200395    4735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:02:46.201526    4735 out.go:352] Setting JSON to false
	I0815 17:02:46.217738    4735 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3734,"bootTime":1723762832,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:02:46.217804    4735 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:02:46.223561    4735 out.go:177] * [kindnet-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:02:46.231565    4735 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:02:46.231616    4735 notify.go:220] Checking for updates...
	I0815 17:02:46.238543    4735 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:02:46.241538    4735 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:02:46.244547    4735 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:02:46.247518    4735 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:02:46.250516    4735 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:02:46.253843    4735 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:02:46.253909    4735 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 17:02:46.253963    4735 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:02:46.258480    4735 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 17:02:46.265485    4735 start.go:297] selected driver: qemu2
	I0815 17:02:46.265493    4735 start.go:901] validating driver "qemu2" against <nil>
	I0815 17:02:46.265499    4735 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:02:46.267861    4735 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:02:46.271455    4735 out.go:177] * Automatically selected the socket_vmnet network
	I0815 17:02:46.274607    4735 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:02:46.274639    4735 cni.go:84] Creating CNI manager for "kindnet"
	I0815 17:02:46.274648    4735 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 17:02:46.274677    4735 start.go:340] cluster config:
	{Name:kindnet-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:02:46.278827    4735 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:02:46.285482    4735 out.go:177] * Starting "kindnet-972000" primary control-plane node in "kindnet-972000" cluster
	I0815 17:02:46.289450    4735 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:02:46.289486    4735 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:02:46.289496    4735 cache.go:56] Caching tarball of preloaded images
	I0815 17:02:46.289593    4735 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:02:46.289600    4735 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:02:46.289686    4735 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/kindnet-972000/config.json ...
	I0815 17:02:46.289702    4735 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/kindnet-972000/config.json: {Name:mk1bcff3df34e0ecb22156f103057ab5fa87125a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:02:46.289913    4735 start.go:360] acquireMachinesLock for kindnet-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:02:46.289949    4735 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "kindnet-972000"
	I0815 17:02:46.289966    4735 start.go:93] Provisioning new machine with config: &{Name:kindnet-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kindnet-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:02:46.289994    4735 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:02:46.299550    4735 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:02:46.316543    4735 start.go:159] libmachine.API.Create for "kindnet-972000" (driver="qemu2")
	I0815 17:02:46.316575    4735 client.go:168] LocalClient.Create starting
	I0815 17:02:46.316656    4735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:02:46.316688    4735 main.go:141] libmachine: Decoding PEM data...
	I0815 17:02:46.316698    4735 main.go:141] libmachine: Parsing certificate...
	I0815 17:02:46.316741    4735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:02:46.316764    4735 main.go:141] libmachine: Decoding PEM data...
	I0815 17:02:46.316774    4735 main.go:141] libmachine: Parsing certificate...
	I0815 17:02:46.317106    4735 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:02:46.473058    4735 main.go:141] libmachine: Creating SSH key...
	I0815 17:02:46.519216    4735 main.go:141] libmachine: Creating Disk image...
	I0815 17:02:46.519222    4735 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:02:46.519429    4735 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/disk.qcow2
	I0815 17:02:46.528922    4735 main.go:141] libmachine: STDOUT: 
	I0815 17:02:46.528941    4735 main.go:141] libmachine: STDERR: 
	I0815 17:02:46.528984    4735 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/disk.qcow2 +20000M
	I0815 17:02:46.536914    4735 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:02:46.536928    4735 main.go:141] libmachine: STDERR: 
	I0815 17:02:46.536957    4735 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/disk.qcow2
	I0815 17:02:46.536962    4735 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:02:46.536974    4735 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:02:46.537000    4735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:3b:f8:17:b6:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/disk.qcow2
	I0815 17:02:46.538591    4735 main.go:141] libmachine: STDOUT: 
	I0815 17:02:46.538608    4735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:02:46.538625    4735 client.go:171] duration metric: took 222.04125ms to LocalClient.Create
	I0815 17:02:48.540755    4735 start.go:128] duration metric: took 2.250724083s to createHost
	I0815 17:02:48.540791    4735 start.go:83] releasing machines lock for "kindnet-972000", held for 2.250812625s
	W0815 17:02:48.540847    4735 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:02:48.550420    4735 out.go:177] * Deleting "kindnet-972000" in qemu2 ...
	W0815 17:02:48.562131    4735 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:02:48.562138    4735 start.go:729] Will try again in 5 seconds ...
	I0815 17:02:53.563889    4735 start.go:360] acquireMachinesLock for kindnet-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:02:53.564009    4735 start.go:364] duration metric: took 100µs to acquireMachinesLock for "kindnet-972000"
	I0815 17:02:53.564033    4735 start.go:93] Provisioning new machine with config: &{Name:kindnet-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kindnet-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:02:53.564093    4735 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:02:53.571253    4735 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:02:53.587744    4735 start.go:159] libmachine.API.Create for "kindnet-972000" (driver="qemu2")
	I0815 17:02:53.587774    4735 client.go:168] LocalClient.Create starting
	I0815 17:02:53.587843    4735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:02:53.587877    4735 main.go:141] libmachine: Decoding PEM data...
	I0815 17:02:53.587885    4735 main.go:141] libmachine: Parsing certificate...
	I0815 17:02:53.587925    4735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:02:53.587947    4735 main.go:141] libmachine: Decoding PEM data...
	I0815 17:02:53.587953    4735 main.go:141] libmachine: Parsing certificate...
	I0815 17:02:53.588276    4735 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:02:53.739422    4735 main.go:141] libmachine: Creating SSH key...
	I0815 17:02:53.821449    4735 main.go:141] libmachine: Creating Disk image...
	I0815 17:02:53.821455    4735 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:02:53.821692    4735 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/disk.qcow2
	I0815 17:02:53.831128    4735 main.go:141] libmachine: STDOUT: 
	I0815 17:02:53.831148    4735 main.go:141] libmachine: STDERR: 
	I0815 17:02:53.831199    4735 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/disk.qcow2 +20000M
	I0815 17:02:53.839456    4735 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:02:53.839498    4735 main.go:141] libmachine: STDERR: 
	I0815 17:02:53.839511    4735 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/disk.qcow2
	I0815 17:02:53.839516    4735 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:02:53.839538    4735 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:02:53.839564    4735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:f7:e1:77:e9:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kindnet-972000/disk.qcow2
	I0815 17:02:53.841343    4735 main.go:141] libmachine: STDOUT: 
	I0815 17:02:53.841361    4735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:02:53.841375    4735 client.go:171] duration metric: took 253.594875ms to LocalClient.Create
	I0815 17:02:55.843521    4735 start.go:128] duration metric: took 2.279457375s to createHost
	I0815 17:02:55.843601    4735 start.go:83] releasing machines lock for "kindnet-972000", held for 2.279642875s
	W0815 17:02:55.843999    4735 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:02:55.853833    4735 out.go:201] 
	W0815 17:02:55.861035    4735 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:02:55.861075    4735 out.go:270] * 
	* 
	W0815 17:02:55.862670    4735 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:02:55.871006    4735 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.71s)
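
The "executing:" lines make the launch mechanism explicit: qemu-system-aarch64 is not run directly but wrapped by socket_vmnet_client, which first connects to /var/run/socket_vmnet and then hands the connected socket to QEMU as file descriptor 3 (hence -netdev socket,id=net0,fd=3). When that initial connect is refused, QEMU never boots, which is why each VM create aborts within milliseconds. A simplified stand-in for that wrapper invocation, with flags abridged from the log (not the driver's actual code):

// launch.go — illustrative sketch of the launch pattern in the
// "executing:" lines above. Paths and flags are taken from the log;
// this is a simplified stand-in, not minikube's qemu2 driver.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	args := []string{
		"/var/run/socket_vmnet", // the client connects here first; refused in this report
		"qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf",
		"-m", "3072", "-smp", "2",
		"-device", "virtio-net-pci,netdev=net0",
		"-netdev", "socket,id=net0,fd=3", // fd 3 is the vmnet connection passed by the client
	}
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// With no daemon listening, this fails before qemu boots.
		log.Fatalf("socket_vmnet_client: %v", err)
	}
}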

TestNetworkPlugins/group/flannel/Start (9.89s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.889909417s)

-- stdout --
	* [flannel-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-972000" primary control-plane node in "flannel-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:02:58.100007    4848 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:02:58.100135    4848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:02:58.100138    4848 out.go:358] Setting ErrFile to fd 2...
	I0815 17:02:58.100141    4848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:02:58.100267    4848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:02:58.101332    4848 out.go:352] Setting JSON to false
	I0815 17:02:58.117650    4848 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3746,"bootTime":1723762832,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:02:58.117726    4848 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:02:58.123650    4848 out.go:177] * [flannel-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:02:58.131623    4848 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:02:58.131677    4848 notify.go:220] Checking for updates...
	I0815 17:02:58.138494    4848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:02:58.141449    4848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:02:58.144512    4848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:02:58.147453    4848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:02:58.150423    4848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:02:58.153789    4848 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:02:58.153853    4848 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 17:02:58.153910    4848 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:02:58.158337    4848 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 17:02:58.165428    4848 start.go:297] selected driver: qemu2
	I0815 17:02:58.165435    4848 start.go:901] validating driver "qemu2" against <nil>
	I0815 17:02:58.165441    4848 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:02:58.167459    4848 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:02:58.170416    4848 out.go:177] * Automatically selected the socket_vmnet network
	I0815 17:02:58.173530    4848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:02:58.173555    4848 cni.go:84] Creating CNI manager for "flannel"
	I0815 17:02:58.173565    4848 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0815 17:02:58.173599    4848 start.go:340] cluster config:
	{Name:flannel-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:02:58.176772    4848 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:02:58.184381    4848 out.go:177] * Starting "flannel-972000" primary control-plane node in "flannel-972000" cluster
	I0815 17:02:58.188407    4848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:02:58.188419    4848 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:02:58.188427    4848 cache.go:56] Caching tarball of preloaded images
	I0815 17:02:58.188474    4848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:02:58.188479    4848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:02:58.188531    4848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/flannel-972000/config.json ...
	I0815 17:02:58.188541    4848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/flannel-972000/config.json: {Name:mka54dd80938a998b0f53ab80ee421f73da3f56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:02:58.188871    4848 start.go:360] acquireMachinesLock for flannel-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:02:58.188900    4848 start.go:364] duration metric: took 24.167µs to acquireMachinesLock for "flannel-972000"
	I0815 17:02:58.188912    4848 start.go:93] Provisioning new machine with config: &{Name:flannel-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:02:58.188941    4848 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:02:58.197340    4848 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:02:58.212502    4848 start.go:159] libmachine.API.Create for "flannel-972000" (driver="qemu2")
	I0815 17:02:58.212529    4848 client.go:168] LocalClient.Create starting
	I0815 17:02:58.212608    4848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:02:58.212639    4848 main.go:141] libmachine: Decoding PEM data...
	I0815 17:02:58.212650    4848 main.go:141] libmachine: Parsing certificate...
	I0815 17:02:58.212688    4848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:02:58.212716    4848 main.go:141] libmachine: Decoding PEM data...
	I0815 17:02:58.212725    4848 main.go:141] libmachine: Parsing certificate...
	I0815 17:02:58.213128    4848 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:02:58.381869    4848 main.go:141] libmachine: Creating SSH key...
	I0815 17:02:58.514609    4848 main.go:141] libmachine: Creating Disk image...
	I0815 17:02:58.514617    4848 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:02:58.514818    4848 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/disk.qcow2
	I0815 17:02:58.523976    4848 main.go:141] libmachine: STDOUT: 
	I0815 17:02:58.523996    4848 main.go:141] libmachine: STDERR: 
	I0815 17:02:58.524051    4848 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/disk.qcow2 +20000M
	I0815 17:02:58.532036    4848 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:02:58.532052    4848 main.go:141] libmachine: STDERR: 
	I0815 17:02:58.532068    4848 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/disk.qcow2
	I0815 17:02:58.532073    4848 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:02:58.532086    4848 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:02:58.532120    4848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d3:1d:34:fb:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/disk.qcow2
	I0815 17:02:58.533877    4848 main.go:141] libmachine: STDOUT: 
	I0815 17:02:58.533891    4848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:02:58.533910    4848 client.go:171] duration metric: took 321.87425ms to LocalClient.Create
	I0815 17:03:00.533430    4848 start.go:128] duration metric: took 2.347824042s to createHost
	I0815 17:03:00.533598    4848 start.go:83] releasing machines lock for "flannel-972000", held for 2.34807875s
	W0815 17:03:00.533659    4848 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:00.546128    4848 out.go:177] * Deleting "flannel-972000" in qemu2 ...
	W0815 17:03:00.570658    4848 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:00.570682    4848 start.go:729] Will try again in 5 seconds ...
	I0815 17:03:05.567199    4848 start.go:360] acquireMachinesLock for flannel-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:03:05.567691    4848 start.go:364] duration metric: took 401.542µs to acquireMachinesLock for "flannel-972000"
	I0815 17:03:05.567835    4848 start.go:93] Provisioning new machine with config: &{Name:flannel-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:03:05.568170    4848 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:03:05.576730    4848 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:03:05.625368    4848 start.go:159] libmachine.API.Create for "flannel-972000" (driver="qemu2")
	I0815 17:03:05.625413    4848 client.go:168] LocalClient.Create starting
	I0815 17:03:05.625524    4848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:03:05.625601    4848 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:05.625622    4848 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:05.625692    4848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:03:05.625737    4848 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:05.625751    4848 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:05.626423    4848 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:03:05.801266    4848 main.go:141] libmachine: Creating SSH key...
	I0815 17:03:05.887378    4848 main.go:141] libmachine: Creating Disk image...
	I0815 17:03:05.887390    4848 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:03:05.887612    4848 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/disk.qcow2
	I0815 17:03:05.897253    4848 main.go:141] libmachine: STDOUT: 
	I0815 17:03:05.897275    4848 main.go:141] libmachine: STDERR: 
	I0815 17:03:05.897326    4848 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/disk.qcow2 +20000M
	I0815 17:03:05.905725    4848 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:03:05.905742    4848 main.go:141] libmachine: STDERR: 
	I0815 17:03:05.905760    4848 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/disk.qcow2
	I0815 17:03:05.905765    4848 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:03:05.905775    4848 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:03:05.905801    4848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:8e:5d:af:26:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/flannel-972000/disk.qcow2
	I0815 17:03:05.907533    4848 main.go:141] libmachine: STDOUT: 
	I0815 17:03:05.907553    4848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:03:05.907573    4848 client.go:171] duration metric: took 282.423334ms to LocalClient.Create
	I0815 17:03:07.908024    4848 start.go:128] duration metric: took 2.341919s to createHost
	I0815 17:03:07.908087    4848 start.go:83] releasing machines lock for "flannel-972000", held for 2.342470583s
	W0815 17:03:07.908345    4848 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:07.919163    4848 out.go:201] 
	W0815 17:03:07.923234    4848 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:03:07.923251    4848 out.go:270] * 
	* 
	W0815 17:03:07.924906    4848 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:03:07.936976    4848 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.89s)
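Both create attempts above die at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never launches and the test exits with status 80. Below is a minimal standalone Go probe, hypothetical and not part of the minikube test suite, that only checks whether a daemon is accepting connections on the path the failing qemu invocation uses:

// socketprobe.go - hypothetical diagnostic; assumes the daemon should be
// listening on the unix socket path shown in the log above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing invocation
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here reproduces the condition these tests hit.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails the way these runs do, every qemu2 start on the socket_vmnet network fails identically, which matches the uniform exit status 80 across this network-plugin group.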

TestNetworkPlugins/group/enable-default-cni/Start (9.92s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.917569166s)

-- stdout --
	* [enable-default-cni-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-972000" primary control-plane node in "enable-default-cni-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:03:10.356612    4969 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:03:10.356764    4969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:03:10.356767    4969 out.go:358] Setting ErrFile to fd 2...
	I0815 17:03:10.356770    4969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:03:10.356909    4969 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:03:10.358122    4969 out.go:352] Setting JSON to false
	I0815 17:03:10.375973    4969 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3758,"bootTime":1723762832,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:03:10.376086    4969 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:03:10.380191    4969 out.go:177] * [enable-default-cni-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:03:10.387241    4969 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:03:10.387277    4969 notify.go:220] Checking for updates...
	I0815 17:03:10.395193    4969 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:03:10.398221    4969 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:03:10.401230    4969 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:03:10.404187    4969 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:03:10.407230    4969 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:03:10.410427    4969 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:03:10.410493    4969 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 17:03:10.410535    4969 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:03:10.415196    4969 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 17:03:10.422104    4969 start.go:297] selected driver: qemu2
	I0815 17:03:10.422111    4969 start.go:901] validating driver "qemu2" against <nil>
	I0815 17:03:10.422117    4969 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:03:10.424288    4969 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:03:10.427167    4969 out.go:177] * Automatically selected the socket_vmnet network
	E0815 17:03:10.430262    4969 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0815 17:03:10.430274    4969 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:03:10.430288    4969 cni.go:84] Creating CNI manager for "bridge"
	I0815 17:03:10.430292    4969 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 17:03:10.430320    4969 start.go:340] cluster config:
	{Name:enable-default-cni-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:03:10.433855    4969 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:03:10.441169    4969 out.go:177] * Starting "enable-default-cni-972000" primary control-plane node in "enable-default-cni-972000" cluster
	I0815 17:03:10.445100    4969 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:03:10.445114    4969 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:03:10.445123    4969 cache.go:56] Caching tarball of preloaded images
	I0815 17:03:10.445177    4969 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:03:10.445183    4969 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:03:10.445243    4969 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/enable-default-cni-972000/config.json ...
	I0815 17:03:10.445253    4969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/enable-default-cni-972000/config.json: {Name:mk9de61b81a530ffc45c7d9b1849a333a1f6f7ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:03:10.445571    4969 start.go:360] acquireMachinesLock for enable-default-cni-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:03:10.445602    4969 start.go:364] duration metric: took 25.292µs to acquireMachinesLock for "enable-default-cni-972000"
	I0815 17:03:10.445614    4969 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:03:10.445643    4969 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:03:10.450161    4969 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:03:10.464986    4969 start.go:159] libmachine.API.Create for "enable-default-cni-972000" (driver="qemu2")
	I0815 17:03:10.465017    4969 client.go:168] LocalClient.Create starting
	I0815 17:03:10.465074    4969 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:03:10.465105    4969 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:10.465112    4969 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:10.465148    4969 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:03:10.465171    4969 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:10.465180    4969 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:10.465576    4969 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:03:10.617763    4969 main.go:141] libmachine: Creating SSH key...
	I0815 17:03:10.854037    4969 main.go:141] libmachine: Creating Disk image...
	I0815 17:03:10.854048    4969 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:03:10.854299    4969 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/disk.qcow2
	I0815 17:03:10.864532    4969 main.go:141] libmachine: STDOUT: 
	I0815 17:03:10.864558    4969 main.go:141] libmachine: STDERR: 
	I0815 17:03:10.864608    4969 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/disk.qcow2 +20000M
	I0815 17:03:10.873086    4969 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:03:10.873106    4969 main.go:141] libmachine: STDERR: 
	I0815 17:03:10.873121    4969 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/disk.qcow2
	I0815 17:03:10.873132    4969 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:03:10.873144    4969 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:03:10.873174    4969 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:50:b2:5a:75:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/disk.qcow2
	I0815 17:03:10.874870    4969 main.go:141] libmachine: STDOUT: 
	I0815 17:03:10.874885    4969 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:03:10.874906    4969 client.go:171] duration metric: took 410.170958ms to LocalClient.Create
	I0815 17:03:12.875734    4969 start.go:128] duration metric: took 2.431664292s to createHost
	I0815 17:03:12.875756    4969 start.go:83] releasing machines lock for "enable-default-cni-972000", held for 2.431732166s
	W0815 17:03:12.875787    4969 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:12.879651    4969 out.go:177] * Deleting "enable-default-cni-972000" in qemu2 ...
	W0815 17:03:12.896486    4969 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:12.896495    4969 start.go:729] Will try again in 5 seconds ...
	I0815 17:03:17.896193    4969 start.go:360] acquireMachinesLock for enable-default-cni-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:03:17.896488    4969 start.go:364] duration metric: took 245.375µs to acquireMachinesLock for "enable-default-cni-972000"
	I0815 17:03:17.896562    4969 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:03:17.896676    4969 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:03:17.906114    4969 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:03:17.940325    4969 start.go:159] libmachine.API.Create for "enable-default-cni-972000" (driver="qemu2")
	I0815 17:03:17.940383    4969 client.go:168] LocalClient.Create starting
	I0815 17:03:17.940493    4969 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:03:17.940545    4969 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:17.940561    4969 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:17.940625    4969 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:03:17.940665    4969 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:17.940678    4969 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:17.941145    4969 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:03:18.097863    4969 main.go:141] libmachine: Creating SSH key...
	I0815 17:03:18.179367    4969 main.go:141] libmachine: Creating Disk image...
	I0815 17:03:18.179376    4969 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:03:18.179587    4969 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/disk.qcow2
	I0815 17:03:18.189184    4969 main.go:141] libmachine: STDOUT: 
	I0815 17:03:18.189203    4969 main.go:141] libmachine: STDERR: 
	I0815 17:03:18.189251    4969 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/disk.qcow2 +20000M
	I0815 17:03:18.197222    4969 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:03:18.197235    4969 main.go:141] libmachine: STDERR: 
	I0815 17:03:18.197254    4969 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/disk.qcow2
	I0815 17:03:18.197259    4969 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:03:18.197269    4969 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:03:18.197301    4969 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:5c:de:ee:18:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/enable-default-cni-972000/disk.qcow2
	I0815 17:03:18.198960    4969 main.go:141] libmachine: STDOUT: 
	I0815 17:03:18.198977    4969 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:03:18.198988    4969 client.go:171] duration metric: took 258.709542ms to LocalClient.Create
	I0815 17:03:20.200398    4969 start.go:128] duration metric: took 2.304606084s to createHost
	I0815 17:03:20.200472    4969 start.go:83] releasing machines lock for "enable-default-cni-972000", held for 2.304891875s
	W0815 17:03:20.200886    4969 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:20.210577    4969 out.go:201] 
	W0815 17:03:20.217549    4969 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:03:20.217584    4969 out.go:270] * 
	* 
	W0815 17:03:20.220245    4969 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:03:20.231474    4969 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.92s)
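The stderr above also traces minikube's retry flow around createHost: the first StartHost attempt fails, the partial profile is deleted, the start is retried once after five seconds (17:03:12.896495 to 17:03:17.896193 in the timestamps), and only then does the run exit with GUEST_PROVISION. A compressed sketch of that control flow, written as hypothetical Go that paraphrases the log rather than minikube's actual source:

// retrysketch.go - hypothetical two-attempt flow mirroring the log above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the libmachine create path; in this report it
// always fails with the socket_vmnet connection error.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry() error {
	const attempts = 2
	for i := 1; i <= attempts; i++ {
		err := createHost()
		if err == nil {
			return nil
		}
		if i < attempts {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
			continue
		}
		return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
	}
	return nil
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}

Because the second attempt hits the same refused connection, the retry only adds the five-second wait to each test's roughly ten-second duration.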

TestNetworkPlugins/group/bridge/Start (9.88s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.874532083s)

-- stdout --
	* [bridge-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-972000" primary control-plane node in "bridge-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:03:22.441077    5080 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:03:22.441218    5080 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:03:22.441222    5080 out.go:358] Setting ErrFile to fd 2...
	I0815 17:03:22.441224    5080 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:03:22.441360    5080 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:03:22.442546    5080 out.go:352] Setting JSON to false
	I0815 17:03:22.459352    5080 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3770,"bootTime":1723762832,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:03:22.459414    5080 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:03:22.464585    5080 out.go:177] * [bridge-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:03:22.472574    5080 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:03:22.472634    5080 notify.go:220] Checking for updates...
	I0815 17:03:22.479562    5080 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:03:22.482598    5080 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:03:22.485565    5080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:03:22.488562    5080 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:03:22.491467    5080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:03:22.494866    5080 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:03:22.494934    5080 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 17:03:22.494976    5080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:03:22.499510    5080 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 17:03:22.506561    5080 start.go:297] selected driver: qemu2
	I0815 17:03:22.506570    5080 start.go:901] validating driver "qemu2" against <nil>
	I0815 17:03:22.506576    5080 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:03:22.508714    5080 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:03:22.511590    5080 out.go:177] * Automatically selected the socket_vmnet network
	I0815 17:03:22.512996    5080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:03:22.513047    5080 cni.go:84] Creating CNI manager for "bridge"
	I0815 17:03:22.513054    5080 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 17:03:22.513084    5080 start.go:340] cluster config:
	{Name:bridge-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:03:22.516427    5080 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:03:22.523567    5080 out.go:177] * Starting "bridge-972000" primary control-plane node in "bridge-972000" cluster
	I0815 17:03:22.527533    5080 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:03:22.527549    5080 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:03:22.527564    5080 cache.go:56] Caching tarball of preloaded images
	I0815 17:03:22.527622    5080 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:03:22.527632    5080 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:03:22.527688    5080 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/bridge-972000/config.json ...
	I0815 17:03:22.527698    5080 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/bridge-972000/config.json: {Name:mkd8ac54454579fa83a33945760b8f7d4999a87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:03:22.528032    5080 start.go:360] acquireMachinesLock for bridge-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:03:22.528063    5080 start.go:364] duration metric: took 25.791µs to acquireMachinesLock for "bridge-972000"
	I0815 17:03:22.528074    5080 start.go:93] Provisioning new machine with config: &{Name:bridge-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:03:22.528103    5080 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:03:22.536415    5080 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:03:22.551872    5080 start.go:159] libmachine.API.Create for "bridge-972000" (driver="qemu2")
	I0815 17:03:22.551904    5080 client.go:168] LocalClient.Create starting
	I0815 17:03:22.551968    5080 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:03:22.552007    5080 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:22.552015    5080 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:22.552053    5080 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:03:22.552091    5080 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:22.552099    5080 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:22.552512    5080 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:03:22.703302    5080 main.go:141] libmachine: Creating SSH key...
	I0815 17:03:22.782707    5080 main.go:141] libmachine: Creating Disk image...
	I0815 17:03:22.782719    5080 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:03:22.782971    5080 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/disk.qcow2
	I0815 17:03:22.792458    5080 main.go:141] libmachine: STDOUT: 
	I0815 17:03:22.792479    5080 main.go:141] libmachine: STDERR: 
	I0815 17:03:22.792530    5080 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/disk.qcow2 +20000M
	I0815 17:03:22.800504    5080 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:03:22.800524    5080 main.go:141] libmachine: STDERR: 
	I0815 17:03:22.800553    5080 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/disk.qcow2
	I0815 17:03:22.800557    5080 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:03:22.800569    5080 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:03:22.800594    5080 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:50:e5:1c:44:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/disk.qcow2
	I0815 17:03:22.802234    5080 main.go:141] libmachine: STDOUT: 
	I0815 17:03:22.802250    5080 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:03:22.802268    5080 client.go:171] duration metric: took 250.440208ms to LocalClient.Create
	I0815 17:03:24.803798    5080 start.go:128] duration metric: took 2.276358083s to createHost
	I0815 17:03:24.803832    5080 start.go:83] releasing machines lock for "bridge-972000", held for 2.27644025s
	W0815 17:03:24.803883    5080 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:24.814104    5080 out.go:177] * Deleting "bridge-972000" in qemu2 ...
	W0815 17:03:24.832536    5080 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:24.832573    5080 start.go:729] Will try again in 5 seconds ...
	I0815 17:03:29.833654    5080 start.go:360] acquireMachinesLock for bridge-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:03:29.834143    5080 start.go:364] duration metric: took 381.042µs to acquireMachinesLock for "bridge-972000"
	I0815 17:03:29.834211    5080 start.go:93] Provisioning new machine with config: &{Name:bridge-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:03:29.834508    5080 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:03:29.839182    5080 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:03:29.880696    5080 start.go:159] libmachine.API.Create for "bridge-972000" (driver="qemu2")
	I0815 17:03:29.880756    5080 client.go:168] LocalClient.Create starting
	I0815 17:03:29.880865    5080 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:03:29.880944    5080 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:29.880957    5080 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:29.881021    5080 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:03:29.881060    5080 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:29.881076    5080 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:29.881671    5080 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:03:30.039128    5080 main.go:141] libmachine: Creating SSH key...
	I0815 17:03:30.220434    5080 main.go:141] libmachine: Creating Disk image...
	I0815 17:03:30.220443    5080 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:03:30.220704    5080 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/disk.qcow2
	I0815 17:03:30.230255    5080 main.go:141] libmachine: STDOUT: 
	I0815 17:03:30.230277    5080 main.go:141] libmachine: STDERR: 
	I0815 17:03:30.230326    5080 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/disk.qcow2 +20000M
	I0815 17:03:30.238267    5080 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:03:30.238284    5080 main.go:141] libmachine: STDERR: 
	I0815 17:03:30.238296    5080 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/disk.qcow2
	I0815 17:03:30.238299    5080 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:03:30.238312    5080 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:03:30.238337    5080 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:44:c5:6f:25:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/bridge-972000/disk.qcow2
	I0815 17:03:30.240081    5080 main.go:141] libmachine: STDOUT: 
	I0815 17:03:30.240099    5080 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:03:30.240111    5080 client.go:171] duration metric: took 359.420334ms to LocalClient.Create
	I0815 17:03:32.241955    5080 start.go:128] duration metric: took 2.407841042s to createHost
	I0815 17:03:32.242036    5080 start.go:83] releasing machines lock for "bridge-972000", held for 2.408311375s
	W0815 17:03:32.242471    5080 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:32.252136    5080 out.go:201] 
	W0815 17:03:32.259355    5080 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:03:32.259388    5080 out.go:270] * 
	* 
	W0815 17:03:32.261765    5080 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:03:32.270219    5080 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.88s)
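Both start attempts for bridge-972000 above fail at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and the test exits with status 80. Below is a minimal Go sketch of a pre-flight probe for that socket; probeSocketVMnet is a hypothetical helper written for this note, not minikube code, and the socket path is taken from the log above.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// probeSocketVMnet dials the socket_vmnet unix socket and reports whether
	// the daemon is listening (hypothetical helper, not part of minikube).
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, err) // reproduces the "Connection refused" seen above
			os.Exit(1)
		}
		fmt.Println("socket_vmnet is listening")
	}

A refused connection here means the daemon is simply not running on the build host; until it is restarted, every qemu2 start in this run fails the same way regardless of the network plugin under test.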

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
E0815 17:03:40.107890    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.794047916s)

                                                
                                                
-- stdout --
	* [kubenet-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-972000" primary control-plane node in "kubenet-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:03:34.508654    5190 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:03:34.508786    5190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:03:34.508790    5190 out.go:358] Setting ErrFile to fd 2...
	I0815 17:03:34.508792    5190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:03:34.508915    5190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:03:34.510019    5190 out.go:352] Setting JSON to false
	I0815 17:03:34.526565    5190 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3782,"bootTime":1723762832,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:03:34.526630    5190 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:03:34.533501    5190 out.go:177] * [kubenet-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:03:34.541427    5190 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:03:34.541454    5190 notify.go:220] Checking for updates...
	I0815 17:03:34.547409    5190 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:03:34.550471    5190 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:03:34.551895    5190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:03:34.555430    5190 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:03:34.558446    5190 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:03:34.561849    5190 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:03:34.561919    5190 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 17:03:34.561967    5190 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:03:34.566389    5190 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 17:03:34.573404    5190 start.go:297] selected driver: qemu2
	I0815 17:03:34.573412    5190 start.go:901] validating driver "qemu2" against <nil>
	I0815 17:03:34.573420    5190 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:03:34.575843    5190 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:03:34.578437    5190 out.go:177] * Automatically selected the socket_vmnet network
	I0815 17:03:34.581542    5190 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:03:34.581574    5190 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0815 17:03:34.581614    5190 start.go:340] cluster config:
	{Name:kubenet-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:03:34.585236    5190 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:03:34.592382    5190 out.go:177] * Starting "kubenet-972000" primary control-plane node in "kubenet-972000" cluster
	I0815 17:03:34.596407    5190 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:03:34.596424    5190 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:03:34.596437    5190 cache.go:56] Caching tarball of preloaded images
	I0815 17:03:34.596497    5190 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:03:34.596502    5190 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:03:34.596578    5190 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/kubenet-972000/config.json ...
	I0815 17:03:34.596591    5190 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/kubenet-972000/config.json: {Name:mkde6ae0da2e642ecf52fc5ce8ee403a229f4ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:03:34.596926    5190 start.go:360] acquireMachinesLock for kubenet-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:03:34.596957    5190 start.go:364] duration metric: took 25.916µs to acquireMachinesLock for "kubenet-972000"
	I0815 17:03:34.596969    5190 start.go:93] Provisioning new machine with config: &{Name:kubenet-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:03:34.597002    5190 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:03:34.601397    5190 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:03:34.617425    5190 start.go:159] libmachine.API.Create for "kubenet-972000" (driver="qemu2")
	I0815 17:03:34.617449    5190 client.go:168] LocalClient.Create starting
	I0815 17:03:34.617507    5190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:03:34.617535    5190 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:34.617544    5190 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:34.617583    5190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:03:34.617606    5190 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:34.617615    5190 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:34.617918    5190 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:03:34.769218    5190 main.go:141] libmachine: Creating SSH key...
	I0815 17:03:34.858108    5190 main.go:141] libmachine: Creating Disk image...
	I0815 17:03:34.858118    5190 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:03:34.858354    5190 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/disk.qcow2
	I0815 17:03:34.867998    5190 main.go:141] libmachine: STDOUT: 
	I0815 17:03:34.868020    5190 main.go:141] libmachine: STDERR: 
	I0815 17:03:34.868077    5190 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/disk.qcow2 +20000M
	I0815 17:03:34.877170    5190 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:03:34.877191    5190 main.go:141] libmachine: STDERR: 
	I0815 17:03:34.877217    5190 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/disk.qcow2
	I0815 17:03:34.877221    5190 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:03:34.877231    5190 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:03:34.877257    5190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:87:04:a7:00:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/disk.qcow2
	I0815 17:03:34.879096    5190 main.go:141] libmachine: STDOUT: 
	I0815 17:03:34.879111    5190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:03:34.879127    5190 client.go:171] duration metric: took 261.711959ms to LocalClient.Create
	I0815 17:03:36.881032    5190 start.go:128] duration metric: took 2.284310166s to createHost
	I0815 17:03:36.881103    5190 start.go:83] releasing machines lock for "kubenet-972000", held for 2.284445583s
	W0815 17:03:36.881189    5190 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:36.886657    5190 out.go:177] * Deleting "kubenet-972000" in qemu2 ...
	W0815 17:03:36.909104    5190 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:36.909122    5190 start.go:729] Will try again in 5 seconds ...
	I0815 17:03:41.910866    5190 start.go:360] acquireMachinesLock for kubenet-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:03:41.911401    5190 start.go:364] duration metric: took 444.292µs to acquireMachinesLock for "kubenet-972000"
	I0815 17:03:41.911467    5190 start.go:93] Provisioning new machine with config: &{Name:kubenet-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:03:41.911637    5190 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:03:41.920303    5190 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:03:41.960943    5190 start.go:159] libmachine.API.Create for "kubenet-972000" (driver="qemu2")
	I0815 17:03:41.961000    5190 client.go:168] LocalClient.Create starting
	I0815 17:03:41.961101    5190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:03:41.961179    5190 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:41.961193    5190 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:41.961246    5190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:03:41.961285    5190 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:41.961298    5190 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:41.961899    5190 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:03:42.119630    5190 main.go:141] libmachine: Creating SSH key...
	I0815 17:03:42.211283    5190 main.go:141] libmachine: Creating Disk image...
	I0815 17:03:42.211289    5190 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:03:42.211504    5190 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/disk.qcow2
	I0815 17:03:42.220847    5190 main.go:141] libmachine: STDOUT: 
	I0815 17:03:42.220863    5190 main.go:141] libmachine: STDERR: 
	I0815 17:03:42.220923    5190 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/disk.qcow2 +20000M
	I0815 17:03:42.228840    5190 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:03:42.228872    5190 main.go:141] libmachine: STDERR: 
	I0815 17:03:42.228884    5190 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/disk.qcow2
	I0815 17:03:42.228888    5190 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:03:42.228898    5190 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:03:42.228932    5190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:d6:47:27:22:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/kubenet-972000/disk.qcow2
	I0815 17:03:42.230592    5190 main.go:141] libmachine: STDOUT: 
	I0815 17:03:42.230607    5190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:03:42.230620    5190 client.go:171] duration metric: took 269.639458ms to LocalClient.Create
	I0815 17:03:44.232670    5190 start.go:128] duration metric: took 2.321179959s to createHost
	I0815 17:03:44.232746    5190 start.go:83] releasing machines lock for "kubenet-972000", held for 2.321516417s
	W0815 17:03:44.233181    5190 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:44.242760    5190 out.go:201] 
	W0815 17:03:44.249902    5190 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:03:44.249935    5190 out.go:270] * 
	* 
	W0815 17:03:44.252759    5190 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:03:44.260844    5190 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.80s)
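kubenet/Start fails through the same two-attempt flow as the other network-plugin tests: createHost fails, the profile is deleted, start.go waits a fixed five seconds ("Will try again in 5 seconds ..."), and the second failure surfaces as GUEST_PROVISION / exit status 80. The sketch below is a simplified Go rendering of that control flow; createHost and cleanup are hypothetical stand-ins for this note, not the real minikube functions.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the pattern visible in the log: one attempt,
	// cleanup on failure, a fixed 5s wait, then exactly one retry.
	func startWithRetry(createHost func() error, cleanup func()) error {
		if err := createHost(); err == nil {
			return nil
		}
		cleanup()                   // "* Deleting ... in qemu2 ..."
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			// a second failure maps to GUEST_PROVISION / exit status 80
			return fmt.Errorf("error provisioning guest: %w", err)
		}
		return nil
	}

	func main() {
		err := startWithRetry(
			func() error { return errors.New(`connect /var/run/socket_vmnet: connection refused`) },
			func() { fmt.Println(`* Deleting "kubenet-972000" in qemu2 ...`) },
		)
		fmt.Println(err)
	}

The ~9.8s wall time of the test is consistent with this shape: two createHost attempts of roughly 2.3s each plus the fixed 5s wait. Because the root cause is a host-side daemon outage rather than anything in the cluster config, the single retry cannot succeed.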

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.754961958s)

                                                
                                                
-- stdout --
	* [custom-flannel-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-972000" primary control-plane node in "custom-flannel-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:03:46.464189    5299 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:03:46.464302    5299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:03:46.464305    5299 out.go:358] Setting ErrFile to fd 2...
	I0815 17:03:46.464308    5299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:03:46.464452    5299 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:03:46.465510    5299 out.go:352] Setting JSON to false
	I0815 17:03:46.481629    5299 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3794,"bootTime":1723762832,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:03:46.481695    5299 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:03:46.488205    5299 out.go:177] * [custom-flannel-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:03:46.496185    5299 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:03:46.496226    5299 notify.go:220] Checking for updates...
	I0815 17:03:46.504191    5299 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:03:46.507124    5299 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:03:46.510157    5299 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:03:46.513198    5299 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:03:46.516197    5299 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:03:46.519511    5299 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:03:46.519577    5299 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 17:03:46.519631    5299 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:03:46.524164    5299 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 17:03:46.531166    5299 start.go:297] selected driver: qemu2
	I0815 17:03:46.531172    5299 start.go:901] validating driver "qemu2" against <nil>
	I0815 17:03:46.531177    5299 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:03:46.533258    5299 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:03:46.536196    5299 out.go:177] * Automatically selected the socket_vmnet network
	I0815 17:03:46.539187    5299 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:03:46.539224    5299 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0815 17:03:46.539233    5299 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0815 17:03:46.539283    5299 start.go:340] cluster config:
	{Name:custom-flannel-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:03:46.542778    5299 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:03:46.550165    5299 out.go:177] * Starting "custom-flannel-972000" primary control-plane node in "custom-flannel-972000" cluster
	I0815 17:03:46.554161    5299 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:03:46.554173    5299 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:03:46.554182    5299 cache.go:56] Caching tarball of preloaded images
	I0815 17:03:46.554232    5299 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:03:46.554237    5299 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:03:46.554292    5299 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/custom-flannel-972000/config.json ...
	I0815 17:03:46.554302    5299 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/custom-flannel-972000/config.json: {Name:mk826c99c0f578dd3daf9e684837658da25431c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:03:46.554551    5299 start.go:360] acquireMachinesLock for custom-flannel-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:03:46.554582    5299 start.go:364] duration metric: took 24.291µs to acquireMachinesLock for "custom-flannel-972000"
	I0815 17:03:46.554594    5299 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:03:46.554623    5299 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:03:46.562130    5299 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:03:46.577048    5299 start.go:159] libmachine.API.Create for "custom-flannel-972000" (driver="qemu2")
	I0815 17:03:46.577074    5299 client.go:168] LocalClient.Create starting
	I0815 17:03:46.577138    5299 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:03:46.577168    5299 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:46.577176    5299 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:46.577220    5299 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:03:46.577243    5299 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:46.577251    5299 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:46.577644    5299 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:03:46.729673    5299 main.go:141] libmachine: Creating SSH key...
	I0815 17:03:46.815519    5299 main.go:141] libmachine: Creating Disk image...
	I0815 17:03:46.815526    5299 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:03:46.815746    5299 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/disk.qcow2
	I0815 17:03:46.825017    5299 main.go:141] libmachine: STDOUT: 
	I0815 17:03:46.825039    5299 main.go:141] libmachine: STDERR: 
	I0815 17:03:46.825088    5299 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/disk.qcow2 +20000M
	I0815 17:03:46.833619    5299 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:03:46.833637    5299 main.go:141] libmachine: STDERR: 
	I0815 17:03:46.833670    5299 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/disk.qcow2
	I0815 17:03:46.833674    5299 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:03:46.833691    5299 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:03:46.833716    5299 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:d7:a9:b2:71:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/disk.qcow2
	I0815 17:03:46.835569    5299 main.go:141] libmachine: STDOUT: 
	I0815 17:03:46.835586    5299 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:03:46.835603    5299 client.go:171] duration metric: took 258.540459ms to LocalClient.Create
	I0815 17:03:48.836257    5299 start.go:128] duration metric: took 2.281765666s to createHost
	I0815 17:03:48.836277    5299 start.go:83] releasing machines lock for "custom-flannel-972000", held for 2.281828167s
	W0815 17:03:48.836291    5299 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:48.839913    5299 out.go:177] * Deleting "custom-flannel-972000" in qemu2 ...
	W0815 17:03:48.854599    5299 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:48.854606    5299 start.go:729] Will try again in 5 seconds ...
	I0815 17:03:53.856573    5299 start.go:360] acquireMachinesLock for custom-flannel-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:03:53.856868    5299 start.go:364] duration metric: took 226.542µs to acquireMachinesLock for "custom-flannel-972000"
	I0815 17:03:53.856899    5299 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:03:53.857003    5299 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:03:53.866328    5299 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:03:53.900516    5299 start.go:159] libmachine.API.Create for "custom-flannel-972000" (driver="qemu2")
	I0815 17:03:53.900571    5299 client.go:168] LocalClient.Create starting
	I0815 17:03:53.900686    5299 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:03:53.900750    5299 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:53.900765    5299 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:53.900833    5299 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:03:53.900873    5299 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:53.900887    5299 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:53.901348    5299 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:03:54.059525    5299 main.go:141] libmachine: Creating SSH key...
	I0815 17:03:54.126400    5299 main.go:141] libmachine: Creating Disk image...
	I0815 17:03:54.126407    5299 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:03:54.126630    5299 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/disk.qcow2
	I0815 17:03:54.135932    5299 main.go:141] libmachine: STDOUT: 
	I0815 17:03:54.135949    5299 main.go:141] libmachine: STDERR: 
	I0815 17:03:54.136003    5299 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/disk.qcow2 +20000M
	I0815 17:03:54.144094    5299 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:03:54.144121    5299 main.go:141] libmachine: STDERR: 
	I0815 17:03:54.144134    5299 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/disk.qcow2
	I0815 17:03:54.144139    5299 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:03:54.144150    5299 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:03:54.144177    5299 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:52:aa:57:5f:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/custom-flannel-972000/disk.qcow2
	I0815 17:03:54.145886    5299 main.go:141] libmachine: STDOUT: 
	I0815 17:03:54.145903    5299 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:03:54.145916    5299 client.go:171] duration metric: took 245.35ms to LocalClient.Create
	I0815 17:03:56.148027    5299 start.go:128] duration metric: took 2.291070958s to createHost
	I0815 17:03:56.148218    5299 start.go:83] releasing machines lock for "custom-flannel-972000", held for 2.291319333s
	W0815 17:03:56.148514    5299 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:03:56.157850    5299 out.go:201] 
	W0815 17:03:56.164101    5299 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:03:56.164148    5299 out.go:270] * 
	* 
	W0815 17:03:56.166094    5299 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:03:56.176037    5299 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.76s)
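
Every start in this group fails at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never launched and each LocalClient.Create attempt dies within a few hundred milliseconds. The Go sketch below is a hypothetical standalone diagnostic (not part of minikube or this test suite); it dials the unix socket the same way the client does and reproduces the "connection refused" seen throughout this report when no daemon is listening.

	// probe.go: hypothetical diagnostic, not minikube code. Dials the
	// socket_vmnet control socket to confirm whether a daemon is listening.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// Path matches SocketVMnetPath in the cluster configs logged above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// With no daemon bound to the socket, this prints
			// "connect: connection refused", the error behind every
			// failure in this report.
			fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

If the probe fails on the build agent, the daemon is simply not running; restarting the socket_vmnet service on the host (for Homebrew installs it is typically run as root, e.g. sudo brew services start socket_vmnet) is the usual remedy.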

TestNetworkPlugins/group/calico/Start (9.99s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.992673042s)

-- stdout --
	* [calico-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-972000" primary control-plane node in "calico-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:03:58.612400    5416 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:03:58.612534    5416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:03:58.612537    5416 out.go:358] Setting ErrFile to fd 2...
	I0815 17:03:58.612539    5416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:03:58.612680    5416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:03:58.613729    5416 out.go:352] Setting JSON to false
	I0815 17:03:58.630783    5416 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3806,"bootTime":1723762832,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:03:58.630856    5416 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:03:58.637346    5416 out.go:177] * [calico-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:03:58.644346    5416 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:03:58.644398    5416 notify.go:220] Checking for updates...
	I0815 17:03:58.649716    5416 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:03:58.653270    5416 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:03:58.656309    5416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:03:58.659256    5416 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:03:58.662276    5416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:03:58.665665    5416 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:03:58.665742    5416 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 17:03:58.665800    5416 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:03:58.670276    5416 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 17:03:58.677282    5416 start.go:297] selected driver: qemu2
	I0815 17:03:58.677293    5416 start.go:901] validating driver "qemu2" against <nil>
	I0815 17:03:58.677302    5416 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:03:58.679671    5416 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:03:58.683284    5416 out.go:177] * Automatically selected the socket_vmnet network
	I0815 17:03:58.686324    5416 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:03:58.686364    5416 cni.go:84] Creating CNI manager for "calico"
	I0815 17:03:58.686369    5416 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0815 17:03:58.686407    5416 start.go:340] cluster config:
	{Name:calico-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:03:58.690037    5416 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:03:58.697139    5416 out.go:177] * Starting "calico-972000" primary control-plane node in "calico-972000" cluster
	I0815 17:03:58.701163    5416 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:03:58.701177    5416 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:03:58.701186    5416 cache.go:56] Caching tarball of preloaded images
	I0815 17:03:58.701241    5416 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:03:58.701252    5416 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:03:58.701315    5416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/calico-972000/config.json ...
	I0815 17:03:58.701326    5416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/calico-972000/config.json: {Name:mk248177b63fa94ade03e8f74e4f5a5c931da106 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:03:58.701665    5416 start.go:360] acquireMachinesLock for calico-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:03:58.701700    5416 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "calico-972000"
	I0815 17:03:58.701712    5416 start.go:93] Provisioning new machine with config: &{Name:calico-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:03:58.701749    5416 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:03:58.710213    5416 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:03:58.728196    5416 start.go:159] libmachine.API.Create for "calico-972000" (driver="qemu2")
	I0815 17:03:58.728232    5416 client.go:168] LocalClient.Create starting
	I0815 17:03:58.728297    5416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:03:58.728327    5416 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:58.728340    5416 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:58.728382    5416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:03:58.728403    5416 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:58.728414    5416 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:58.728890    5416 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:03:58.879892    5416 main.go:141] libmachine: Creating SSH key...
	I0815 17:03:59.048968    5416 main.go:141] libmachine: Creating Disk image...
	I0815 17:03:59.048979    5416 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:03:59.049237    5416 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/disk.qcow2
	I0815 17:03:59.058979    5416 main.go:141] libmachine: STDOUT: 
	I0815 17:03:59.059014    5416 main.go:141] libmachine: STDERR: 
	I0815 17:03:59.059071    5416 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/disk.qcow2 +20000M
	I0815 17:03:59.067151    5416 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:03:59.067176    5416 main.go:141] libmachine: STDERR: 
	I0815 17:03:59.067192    5416 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/disk.qcow2
	I0815 17:03:59.067198    5416 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:03:59.067209    5416 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:03:59.067234    5416 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:03:b1:3f:c3:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/disk.qcow2
	I0815 17:03:59.068932    5416 main.go:141] libmachine: STDOUT: 
	I0815 17:03:59.068948    5416 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:03:59.068966    5416 client.go:171] duration metric: took 340.738042ms to LocalClient.Create
	I0815 17:04:01.071017    5416 start.go:128] duration metric: took 2.369311709s to createHost
	I0815 17:04:01.071044    5416 start.go:83] releasing machines lock for "calico-972000", held for 2.369397958s
	W0815 17:04:01.071082    5416 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:01.075365    5416 out.go:177] * Deleting "calico-972000" in qemu2 ...
	W0815 17:04:01.091917    5416 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:01.091933    5416 start.go:729] Will try again in 5 seconds ...
	I0815 17:04:06.094091    5416 start.go:360] acquireMachinesLock for calico-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:04:06.094674    5416 start.go:364] duration metric: took 458.625µs to acquireMachinesLock for "calico-972000"
	I0815 17:04:06.094847    5416 start.go:93] Provisioning new machine with config: &{Name:calico-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:04:06.095175    5416 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:04:06.103159    5416 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:04:06.151854    5416 start.go:159] libmachine.API.Create for "calico-972000" (driver="qemu2")
	I0815 17:04:06.151910    5416 client.go:168] LocalClient.Create starting
	I0815 17:04:06.152046    5416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:04:06.152107    5416 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:06.152121    5416 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:06.152207    5416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:04:06.152252    5416 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:06.152263    5416 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:06.152764    5416 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:04:06.315215    5416 main.go:141] libmachine: Creating SSH key...
	I0815 17:04:06.515789    5416 main.go:141] libmachine: Creating Disk image...
	I0815 17:04:06.515805    5416 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:04:06.516092    5416 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/disk.qcow2
	I0815 17:04:06.526021    5416 main.go:141] libmachine: STDOUT: 
	I0815 17:04:06.526045    5416 main.go:141] libmachine: STDERR: 
	I0815 17:04:06.526120    5416 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/disk.qcow2 +20000M
	I0815 17:04:06.534394    5416 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:04:06.534411    5416 main.go:141] libmachine: STDERR: 
	I0815 17:04:06.534426    5416 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/disk.qcow2
	I0815 17:04:06.534432    5416 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:04:06.534445    5416 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:04:06.534471    5416 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:35:38:e4:66:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/calico-972000/disk.qcow2
	I0815 17:04:06.536502    5416 main.go:141] libmachine: STDOUT: 
	I0815 17:04:06.536695    5416 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:04:06.536720    5416 client.go:171] duration metric: took 384.806375ms to LocalClient.Create
	I0815 17:04:08.538983    5416 start.go:128] duration metric: took 2.443754459s to createHost
	I0815 17:04:08.539092    5416 start.go:83] releasing machines lock for "calico-972000", held for 2.444418791s
	W0815 17:04:08.539449    5416 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:08.548975    5416 out.go:201] 
	W0815 17:04:08.554877    5416 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:04:08.554894    5416 out.go:270] * 
	* 
	W0815 17:04:08.556822    5416 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:04:08.563819    5416 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.99s)
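
The calico run traces the same two-attempt flow as the other failures: the first createHost fails, the half-created profile is deleted, start.go waits five seconds and retries once, and the second failure escalates to GUEST_PROVISION with exit status 80, which net_test.go then reports. Below is a condensed, hypothetical sketch of that control flow for readers following the log; it uses simplified stand-ins and is not minikube's actual start.go.

	// retrysketch.go: hypothetical condensation of the flow these logs
	// trace; minikube's real logic lives in start.go and is more involved.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for libmachine.API.Create; in this report it
	// always fails because the socket_vmnet socket is unreachable.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status net_test.go asserts on
			}
		}
	}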

TestNetworkPlugins/group/false/Start (10.17s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (10.164876833s)

-- stdout --
	* [false-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-972000" primary control-plane node in "false-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:04:10.956740    5536 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:04:10.956876    5536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:04:10.956879    5536 out.go:358] Setting ErrFile to fd 2...
	I0815 17:04:10.956882    5536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:04:10.957008    5536 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:04:10.958096    5536 out.go:352] Setting JSON to false
	I0815 17:04:10.974441    5536 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3818,"bootTime":1723762832,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:04:10.974507    5536 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:04:10.979725    5536 out.go:177] * [false-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:04:10.987563    5536 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:04:10.987593    5536 notify.go:220] Checking for updates...
	I0815 17:04:10.994602    5536 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:04:10.997634    5536 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:04:11.000618    5536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:04:11.001950    5536 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:04:11.004630    5536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:04:11.007993    5536 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:04:11.008068    5536 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 17:04:11.008115    5536 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:04:11.012413    5536 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 17:04:11.019649    5536 start.go:297] selected driver: qemu2
	I0815 17:04:11.019659    5536 start.go:901] validating driver "qemu2" against <nil>
	I0815 17:04:11.019666    5536 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:04:11.021932    5536 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:04:11.024546    5536 out.go:177] * Automatically selected the socket_vmnet network
	I0815 17:04:11.027662    5536 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:04:11.027681    5536 cni.go:84] Creating CNI manager for "false"
	I0815 17:04:11.027702    5536 start.go:340] cluster config:
	{Name:false-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:04:11.031002    5536 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:11.037561    5536 out.go:177] * Starting "false-972000" primary control-plane node in "false-972000" cluster
	I0815 17:04:11.041591    5536 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:04:11.041606    5536 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:04:11.041613    5536 cache.go:56] Caching tarball of preloaded images
	I0815 17:04:11.041675    5536 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:04:11.041680    5536 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:04:11.041740    5536 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/false-972000/config.json ...
	I0815 17:04:11.041751    5536 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/false-972000/config.json: {Name:mka26d05ea5852610931c767b6734e54dcd0a418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:04:11.041964    5536 start.go:360] acquireMachinesLock for false-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:04:11.041995    5536 start.go:364] duration metric: took 25µs to acquireMachinesLock for "false-972000"
	I0815 17:04:11.042007    5536 start.go:93] Provisioning new machine with config: &{Name:false-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:04:11.042030    5536 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:04:11.053598    5536 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:04:11.068529    5536 start.go:159] libmachine.API.Create for "false-972000" (driver="qemu2")
	I0815 17:04:11.068557    5536 client.go:168] LocalClient.Create starting
	I0815 17:04:11.068619    5536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:04:11.068649    5536 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:11.068664    5536 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:11.068706    5536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:04:11.068728    5536 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:11.068738    5536 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:11.069131    5536 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:04:11.222484    5536 main.go:141] libmachine: Creating SSH key...
	I0815 17:04:11.529298    5536 main.go:141] libmachine: Creating Disk image...
	I0815 17:04:11.529308    5536 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:04:11.529571    5536 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/disk.qcow2
	I0815 17:04:11.539732    5536 main.go:141] libmachine: STDOUT: 
	I0815 17:04:11.539754    5536 main.go:141] libmachine: STDERR: 
	I0815 17:04:11.539814    5536 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/disk.qcow2 +20000M
	I0815 17:04:11.548315    5536 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:04:11.548335    5536 main.go:141] libmachine: STDERR: 
	I0815 17:04:11.548357    5536 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/disk.qcow2
	I0815 17:04:11.548362    5536 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:04:11.548377    5536 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:04:11.548406    5536 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:e4:e7:fa:da:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/disk.qcow2
	I0815 17:04:11.550308    5536 main.go:141] libmachine: STDOUT: 
	I0815 17:04:11.550327    5536 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:04:11.550349    5536 client.go:171] duration metric: took 481.791833ms to LocalClient.Create
	I0815 17:04:13.552694    5536 start.go:128] duration metric: took 2.510652917s to createHost
	I0815 17:04:13.552798    5536 start.go:83] releasing machines lock for "false-972000", held for 2.510816583s
	W0815 17:04:13.552901    5536 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:13.563929    5536 out.go:177] * Deleting "false-972000" in qemu2 ...
	W0815 17:04:13.595877    5536 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:13.595910    5536 start.go:729] Will try again in 5 seconds ...
	I0815 17:04:18.598087    5536 start.go:360] acquireMachinesLock for false-972000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:04:18.598468    5536 start.go:364] duration metric: took 285.458µs to acquireMachinesLock for "false-972000"
	I0815 17:04:18.598603    5536 start.go:93] Provisioning new machine with config: &{Name:false-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:04:18.598796    5536 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:04:18.608343    5536 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 17:04:18.646831    5536 start.go:159] libmachine.API.Create for "false-972000" (driver="qemu2")
	I0815 17:04:18.646884    5536 client.go:168] LocalClient.Create starting
	I0815 17:04:18.647006    5536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:04:18.647095    5536 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:18.647145    5536 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:18.647209    5536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:04:18.647260    5536 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:18.647271    5536 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:18.647763    5536 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:04:18.807804    5536 main.go:141] libmachine: Creating SSH key...
	I0815 17:04:19.028407    5536 main.go:141] libmachine: Creating Disk image...
	I0815 17:04:19.028419    5536 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:04:19.028666    5536 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/disk.qcow2
	I0815 17:04:19.038587    5536 main.go:141] libmachine: STDOUT: 
	I0815 17:04:19.038614    5536 main.go:141] libmachine: STDERR: 
	I0815 17:04:19.038684    5536 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/disk.qcow2 +20000M
	I0815 17:04:19.046921    5536 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:04:19.046936    5536 main.go:141] libmachine: STDERR: 
	I0815 17:04:19.046955    5536 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/disk.qcow2
	I0815 17:04:19.046961    5536 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:04:19.046971    5536 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:04:19.047096    5536 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:10:34:15:87:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/false-972000/disk.qcow2
	I0815 17:04:19.048770    5536 main.go:141] libmachine: STDOUT: 
	I0815 17:04:19.048931    5536 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:04:19.048946    5536 client.go:171] duration metric: took 402.056709ms to LocalClient.Create
	I0815 17:04:21.051142    5536 start.go:128] duration metric: took 2.452325375s to createHost
	I0815 17:04:21.051208    5536 start.go:83] releasing machines lock for "false-972000", held for 2.452715916s
	W0815 17:04:21.051624    5536 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:21.063395    5536 out.go:201] 
	W0815 17:04:21.066427    5536 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:04:21.066472    5536 out.go:270] * 
	* 
	W0815 17:04:21.069041    5536 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:04:21.079386    5536 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (10.17s)
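Every qemu2 start captured in this report dies the same way: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor (the -netdev socket,id=net0,fd=3 argument above). A minimal Go probe of that socket (the path is taken from the command lines above; this check is not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client is pointed at above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// This is the state the failing tests observe: connection refused.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

When this dial is refused, the socket_vmnet daemon on the CI host is down (or its socket path is stale), and every qemu2 driver start that follows is expected to fail with exit status 80, as the remaining tests below show.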

TestStartStop/group/old-k8s-version/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-250000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-250000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.814951166s)

-- stdout --
	* [old-k8s-version-250000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-250000" primary control-plane node in "old-k8s-version-250000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-250000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:04:23.260188    5647 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:04:23.260330    5647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:04:23.260334    5647 out.go:358] Setting ErrFile to fd 2...
	I0815 17:04:23.260336    5647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:04:23.260463    5647 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:04:23.261610    5647 out.go:352] Setting JSON to false
	I0815 17:04:23.278718    5647 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3831,"bootTime":1723762832,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:04:23.278783    5647 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:04:23.283179    5647 out.go:177] * [old-k8s-version-250000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:04:23.291076    5647 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:04:23.291084    5647 notify.go:220] Checking for updates...
	I0815 17:04:23.298166    5647 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:04:23.301092    5647 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:04:23.304140    5647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:04:23.307201    5647 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:04:23.310122    5647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:04:23.313420    5647 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:04:23.313487    5647 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 17:04:23.313529    5647 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:04:23.317172    5647 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 17:04:23.325224    5647 start.go:297] selected driver: qemu2
	I0815 17:04:23.325231    5647 start.go:901] validating driver "qemu2" against <nil>
	I0815 17:04:23.325236    5647 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:04:23.327480    5647 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:04:23.331979    5647 out.go:177] * Automatically selected the socket_vmnet network
	I0815 17:04:23.336252    5647 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:04:23.336282    5647 cni.go:84] Creating CNI manager for ""
	I0815 17:04:23.336288    5647 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0815 17:04:23.336312    5647 start.go:340] cluster config:
	{Name:old-k8s-version-250000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-250000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:04:23.339845    5647 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:23.348173    5647 out.go:177] * Starting "old-k8s-version-250000" primary control-plane node in "old-k8s-version-250000" cluster
	I0815 17:04:23.352132    5647 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 17:04:23.352147    5647 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0815 17:04:23.352154    5647 cache.go:56] Caching tarball of preloaded images
	I0815 17:04:23.352204    5647 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:04:23.352209    5647 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0815 17:04:23.352268    5647 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/old-k8s-version-250000/config.json ...
	I0815 17:04:23.352278    5647 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/old-k8s-version-250000/config.json: {Name:mkcad49a197abaebcfe1a35f87c03a507e2b554f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:04:23.352558    5647 start.go:360] acquireMachinesLock for old-k8s-version-250000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:04:23.352588    5647 start.go:364] duration metric: took 23.083µs to acquireMachinesLock for "old-k8s-version-250000"
	I0815 17:04:23.352598    5647 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-250000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-250000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:04:23.352622    5647 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:04:23.356107    5647 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:04:23.371217    5647 start.go:159] libmachine.API.Create for "old-k8s-version-250000" (driver="qemu2")
	I0815 17:04:23.371250    5647 client.go:168] LocalClient.Create starting
	I0815 17:04:23.371315    5647 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:04:23.371347    5647 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:23.371359    5647 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:23.371400    5647 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:04:23.371428    5647 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:23.371434    5647 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:23.371876    5647 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:04:23.521070    5647 main.go:141] libmachine: Creating SSH key...
	I0815 17:04:23.563150    5647 main.go:141] libmachine: Creating Disk image...
	I0815 17:04:23.563157    5647 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:04:23.563385    5647 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/disk.qcow2
	I0815 17:04:23.572824    5647 main.go:141] libmachine: STDOUT: 
	I0815 17:04:23.572847    5647 main.go:141] libmachine: STDERR: 
	I0815 17:04:23.572900    5647 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/disk.qcow2 +20000M
	I0815 17:04:23.580941    5647 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:04:23.580957    5647 main.go:141] libmachine: STDERR: 
	I0815 17:04:23.580974    5647 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/disk.qcow2
	I0815 17:04:23.580977    5647 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:04:23.580989    5647 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:04:23.581031    5647 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:35:5e:6c:8a:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/disk.qcow2
	I0815 17:04:23.582682    5647 main.go:141] libmachine: STDOUT: 
	I0815 17:04:23.582701    5647 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:04:23.582720    5647 client.go:171] duration metric: took 211.464875ms to LocalClient.Create
	I0815 17:04:25.584405    5647 start.go:128] duration metric: took 2.231776875s to createHost
	I0815 17:04:25.584447    5647 start.go:83] releasing machines lock for "old-k8s-version-250000", held for 2.231859708s
	W0815 17:04:25.584492    5647 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:25.598431    5647 out.go:177] * Deleting "old-k8s-version-250000" in qemu2 ...
	W0815 17:04:25.613780    5647 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:25.613790    5647 start.go:729] Will try again in 5 seconds ...
	I0815 17:04:30.616091    5647 start.go:360] acquireMachinesLock for old-k8s-version-250000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:04:30.616714    5647 start.go:364] duration metric: took 481.875µs to acquireMachinesLock for "old-k8s-version-250000"
	I0815 17:04:30.616996    5647 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-250000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-250000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:04:30.617272    5647 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:04:30.626998    5647 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:04:30.677506    5647 start.go:159] libmachine.API.Create for "old-k8s-version-250000" (driver="qemu2")
	I0815 17:04:30.677561    5647 client.go:168] LocalClient.Create starting
	I0815 17:04:30.677672    5647 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:04:30.677741    5647 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:30.677759    5647 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:30.677818    5647 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:04:30.677863    5647 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:30.677874    5647 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:30.678419    5647 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:04:30.840950    5647 main.go:141] libmachine: Creating SSH key...
	I0815 17:04:30.975326    5647 main.go:141] libmachine: Creating Disk image...
	I0815 17:04:30.975338    5647 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:04:30.975568    5647 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/disk.qcow2
	I0815 17:04:30.985380    5647 main.go:141] libmachine: STDOUT: 
	I0815 17:04:30.985406    5647 main.go:141] libmachine: STDERR: 
	I0815 17:04:30.985456    5647 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/disk.qcow2 +20000M
	I0815 17:04:30.993973    5647 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:04:30.993990    5647 main.go:141] libmachine: STDERR: 
	I0815 17:04:30.994013    5647 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/disk.qcow2
	I0815 17:04:30.994017    5647 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:04:30.994030    5647 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:04:30.994063    5647 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:17:50:bb:eb:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/disk.qcow2
	I0815 17:04:30.995840    5647 main.go:141] libmachine: STDOUT: 
	I0815 17:04:30.995860    5647 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:04:30.995876    5647 client.go:171] duration metric: took 318.309042ms to LocalClient.Create
	I0815 17:04:32.998052    5647 start.go:128] duration metric: took 2.380718542s to createHost
	I0815 17:04:32.998121    5647 start.go:83] releasing machines lock for "old-k8s-version-250000", held for 2.381383375s
	W0815 17:04:32.998467    5647 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-250000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-250000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:33.012029    5647 out.go:201] 
	W0815 17:04:33.016130    5647 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:04:33.016151    5647 out.go:270] * 
	* 
	W0815 17:04:33.017960    5647 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:04:33.032079    5647 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-250000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000: exit status 7 (64.746708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.88s)
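Note that disk preparation succeeds before the network step fails: libmachine runs qemu-img convert (raw to qcow2) and then qemu-img resize +20000M, and both return empty STDERR. A sketch of that two-step sequence via os/exec, assuming qemu-img is on PATH; the file names are placeholders, not the machine paths above:

package main

import (
	"fmt"
	"os/exec"
)

// createDisk mirrors the two qemu-img invocations logged by libmachine:
// convert the raw scaffold to qcow2, then grow the image by the requested amount.
func createDisk(raw, qcow2, grow string) error {
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("convert: %v: %s", err, out)
	}
	if out, err := exec.Command("qemu-img", "resize", qcow2, grow).CombinedOutput(); err != nil {
		return fmt.Errorf("resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createDisk("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
		fmt.Println(err)
	}
}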

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-250000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-250000 create -f testdata/busybox.yaml: exit status 1 (30.005959ms)

** stderr ** 
	error: context "old-k8s-version-250000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-250000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000: exit status 7 (28.998291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-250000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000: exit status 7 (29.588917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
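This failure is purely downstream of FirstStart: the VM never booted, so minikube never wrote an old-k8s-version-250000 context into the kubeconfig, and kubectl aborts before contacting any server. A hedged sketch of a pre-flight context check using k8s.io/client-go (an assumed dependency; the harness itself does not check this way):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG path as printed at the top of the failed start output.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19452-964/kubeconfig")
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["old-k8s-version-250000"]; !ok {
		// Matches the kubectl error captured above.
		fmt.Println(`context "old-k8s-version-250000" does not exist`)
	}
}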

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-250000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-250000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-250000 describe deploy/metrics-server -n kube-system: exit status 1 (27.1405ms)

** stderr ** 
	error: context "old-k8s-version-250000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-250000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000: exit status 7 (33.013708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-250000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-250000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.18797725s)

-- stdout --
	* [old-k8s-version-250000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-250000" primary control-plane node in "old-k8s-version-250000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-250000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-250000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:04:37.071726    5700 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:04:37.071867    5700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:04:37.071871    5700 out.go:358] Setting ErrFile to fd 2...
	I0815 17:04:37.071873    5700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:04:37.072005    5700 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:04:37.073130    5700 out.go:352] Setting JSON to false
	I0815 17:04:37.089587    5700 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3845,"bootTime":1723762832,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:04:37.089652    5700 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:04:37.094837    5700 out.go:177] * [old-k8s-version-250000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:04:37.101718    5700 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:04:37.101760    5700 notify.go:220] Checking for updates...
	I0815 17:04:37.108802    5700 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:04:37.110241    5700 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:04:37.112839    5700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:04:37.115849    5700 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:04:37.118880    5700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:04:37.122213    5700 config.go:182] Loaded profile config "old-k8s-version-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0815 17:04:37.125827    5700 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 17:04:37.128850    5700 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:04:37.132742    5700 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 17:04:37.139777    5700 start.go:297] selected driver: qemu2
	I0815 17:04:37.139783    5700 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-250000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-250000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:04:37.139837    5700 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:04:37.142265    5700 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:04:37.142293    5700 cni.go:84] Creating CNI manager for ""
	I0815 17:04:37.142301    5700 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0815 17:04:37.142334    5700 start.go:340] cluster config:
	{Name:old-k8s-version-250000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-250000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:04:37.145958    5700 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:37.153828    5700 out.go:177] * Starting "old-k8s-version-250000" primary control-plane node in "old-k8s-version-250000" cluster
	I0815 17:04:37.157816    5700 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 17:04:37.157830    5700 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0815 17:04:37.157838    5700 cache.go:56] Caching tarball of preloaded images
	I0815 17:04:37.157890    5700 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:04:37.157896    5700 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0815 17:04:37.157952    5700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/old-k8s-version-250000/config.json ...
	I0815 17:04:37.158427    5700 start.go:360] acquireMachinesLock for old-k8s-version-250000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:04:37.158460    5700 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "old-k8s-version-250000"
	I0815 17:04:37.158470    5700 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:04:37.158476    5700 fix.go:54] fixHost starting: 
	I0815 17:04:37.158591    5700 fix.go:112] recreateIfNeeded on old-k8s-version-250000: state=Stopped err=<nil>
	W0815 17:04:37.158598    5700 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:04:37.161849    5700 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-250000" ...
	I0815 17:04:37.169811    5700 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:04:37.169846    5700 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:17:50:bb:eb:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/disk.qcow2
	I0815 17:04:37.171887    5700 main.go:141] libmachine: STDOUT: 
	I0815 17:04:37.171907    5700 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:04:37.171936    5700 fix.go:56] duration metric: took 13.459708ms for fixHost
	I0815 17:04:37.171940    5700 start.go:83] releasing machines lock for "old-k8s-version-250000", held for 13.476416ms
	W0815 17:04:37.171946    5700 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:04:37.171992    5700 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:37.171997    5700 start.go:729] Will try again in 5 seconds ...
	I0815 17:04:42.174220    5700 start.go:360] acquireMachinesLock for old-k8s-version-250000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:04:42.174821    5700 start.go:364] duration metric: took 456.875µs to acquireMachinesLock for "old-k8s-version-250000"
	I0815 17:04:42.174908    5700 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:04:42.174928    5700 fix.go:54] fixHost starting: 
	I0815 17:04:42.175634    5700 fix.go:112] recreateIfNeeded on old-k8s-version-250000: state=Stopped err=<nil>
	W0815 17:04:42.175661    5700 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:04:42.184338    5700 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-250000" ...
	I0815 17:04:42.188356    5700 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:04:42.188621    5700 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:17:50:bb:eb:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/old-k8s-version-250000/disk.qcow2
	I0815 17:04:42.198122    5700 main.go:141] libmachine: STDOUT: 
	I0815 17:04:42.198212    5700 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:04:42.198296    5700 fix.go:56] duration metric: took 23.371125ms for fixHost
	I0815 17:04:42.198315    5700 start.go:83] releasing machines lock for "old-k8s-version-250000", held for 23.471ms
	W0815 17:04:42.198478    5700 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-250000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-250000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:42.206361    5700 out.go:201] 
	W0815 17:04:42.210368    5700 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:04:42.210414    5700 out.go:270] * 
	* 
	W0815 17:04:42.213305    5700 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:04:42.220308    5700 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-250000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000: exit status 7 (65.364042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
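The second start takes the fixHost path (restarting the existing VM rather than creating one) but hits the same socket error, and the log shows the driver's retry shape explicitly: StartHost fails, start.go:729 waits five seconds, then the single retry fails and the run exits with status 80. An illustrative reduction of that two-attempt, fixed-delay flow (function names here are not minikube's):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that fails above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // the fixed delay logged by start.go:729
		if err := startHost(); err != nil {
			fmt.Println("* Failed to start qemu2 VM:", err) // exit status 80 path
		}
	}
}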

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-250000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000: exit status 7 (31.944542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-250000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-250000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-250000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.145916ms)

** stderr ** 
	error: context "old-k8s-version-250000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-250000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000: exit status 7 (29.869625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
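A note on the failure above: because the post-stop start exited with status 80, minikube never re-created the cluster, so no "old-k8s-version-250000" entry was written to the kubeconfig and every kubectl call fails before reaching a server. Below is a minimal sketch of that same context lookup, assuming the KUBECONFIG path shown in the logs and using k8s.io/client-go (the library kubectl builds on); it is a standalone probe, not part of the test suite:

	// contextcheck.go: hypothetical standalone probe of the kubeconfig.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Honors $KUBECONFIG, e.g. /Users/jenkins/minikube-integration/19452-964/kubeconfig.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, "cannot load kubeconfig:", err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts["old-k8s-version-250000"]; !ok {
			// This is the condition kubectl surfaces as:
			//   error: context "old-k8s-version-250000" does not exist
			fmt.Println("context not found; the failed start never registered the profile")
		}
	}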

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-250000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000: exit status 7 (29.674458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
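The want/got diff above lists every expected v1.20.0 image as missing, which follows directly from the failed start: with no VM to query, "image list" returns nothing to match against. Below is a sketch that replays the check by hand; the "repoTags" field name is an assumption about the JSON emitted by "image list --format=json", not something this log confirms:

	// imagecheck.go: hypothetical helper replaying the VerifyKubernetesImages query.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image models only the field we need; "repoTags" is an assumed field name.
	type image struct {
		RepoTags []string `json:"repoTags"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "old-k8s-version-250000",
			"image", "list", "--format=json").Output()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		var imgs []image
		if err := json.Unmarshal(out, &imgs); err != nil {
			fmt.Println("unexpected output shape:", err)
			return
		}
		// An empty listing here reproduces the all-images-missing diff above.
		for _, img := range imgs {
			for _, tag := range img.RepoTags {
				fmt.Println(tag)
			}
		}
	}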

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-250000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-250000 --alsologtostderr -v=1: exit status 83 (42.814625ms)

-- stdout --
	* The control-plane node old-k8s-version-250000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-250000"

-- /stdout --
** stderr ** 
	I0815 17:04:42.491519    5719 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:04:42.492545    5719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:04:42.492553    5719 out.go:358] Setting ErrFile to fd 2...
	I0815 17:04:42.492556    5719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:04:42.492726    5719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:04:42.492922    5719 out.go:352] Setting JSON to false
	I0815 17:04:42.492932    5719 mustload.go:65] Loading cluster: old-k8s-version-250000
	I0815 17:04:42.493124    5719 config.go:182] Loaded profile config "old-k8s-version-250000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0815 17:04:42.497845    5719 out.go:177] * The control-plane node old-k8s-version-250000 host is not running: state=Stopped
	I0815 17:04:42.500813    5719 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-250000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-250000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000: exit status 7 (29.789833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-250000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000: exit status 7 (29.233333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-250000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-066000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-066000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.888862334s)

-- stdout --
	* [no-preload-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-066000" primary control-plane node in "no-preload-066000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-066000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:04:42.810694    5736 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:04:42.810836    5736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:04:42.810840    5736 out.go:358] Setting ErrFile to fd 2...
	I0815 17:04:42.810842    5736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:04:42.810986    5736 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:04:42.812158    5736 out.go:352] Setting JSON to false
	I0815 17:04:42.829177    5736 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3850,"bootTime":1723762832,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:04:42.829265    5736 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:04:42.834006    5736 out.go:177] * [no-preload-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:04:42.841875    5736 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:04:42.841965    5736 notify.go:220] Checking for updates...
	I0815 17:04:42.848948    5736 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:04:42.851923    5736 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:04:42.854992    5736 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:04:42.857986    5736 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:04:42.860967    5736 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:04:42.864282    5736 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:04:42.864343    5736 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 17:04:42.864386    5736 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:04:42.868984    5736 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 17:04:42.878051    5736 start.go:297] selected driver: qemu2
	I0815 17:04:42.878063    5736 start.go:901] validating driver "qemu2" against <nil>
	I0815 17:04:42.878072    5736 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:04:42.880438    5736 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:04:42.883783    5736 out.go:177] * Automatically selected the socket_vmnet network
	I0815 17:04:42.887004    5736 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:04:42.887023    5736 cni.go:84] Creating CNI manager for ""
	I0815 17:04:42.887030    5736 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 17:04:42.887034    5736 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 17:04:42.887061    5736 start.go:340] cluster config:
	{Name:no-preload-066000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:04:42.890747    5736 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:42.899886    5736 out.go:177] * Starting "no-preload-066000" primary control-plane node in "no-preload-066000" cluster
	I0815 17:04:42.903978    5736 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:04:42.904051    5736 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/no-preload-066000/config.json ...
	I0815 17:04:42.904070    5736 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/no-preload-066000/config.json: {Name:mk86c20c0174f10ad4d2d5f48ca11b01f08132c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:04:42.904093    5736 cache.go:107] acquiring lock: {Name:mk5ebd5d9fabf0d0ad1dd23fa899fc4d8a6c6372 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:42.904093    5736 cache.go:107] acquiring lock: {Name:mk6e9e1f1ce1be342cc27bbace4cb70efe9cc45d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:42.904132    5736 cache.go:107] acquiring lock: {Name:mkf0428cf3c6704f57aac01b10817eeae3ab98d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:42.904150    5736 cache.go:115] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0815 17:04:42.904162    5736 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 72.875µs
	I0815 17:04:42.904170    5736 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0815 17:04:42.904175    5736 cache.go:107] acquiring lock: {Name:mke1fd1c1394231d7cbb9be17fa1281e78522d0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:42.904231    5736 cache.go:107] acquiring lock: {Name:mkb28e624bf9dce290f648916c84012761012b7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:42.904250    5736 cache.go:107] acquiring lock: {Name:mkfe0a62d51ca271bdb32ca22575e4900d17546d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:42.904281    5736 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 17:04:42.904262    5736 cache.go:107] acquiring lock: {Name:mk69ee381b0cae9fc0dfd5bf4da924629ffc2c6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:42.904264    5736 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 17:04:42.904335    5736 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 17:04:42.904340    5736 start.go:360] acquireMachinesLock for no-preload-066000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:04:42.904368    5736 cache.go:107] acquiring lock: {Name:mk3c9482a26a1f7dc0463ecc2a15facbbc09d16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:42.904394    5736 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 17:04:42.904455    5736 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0815 17:04:42.904480    5736 start.go:364] duration metric: took 133.583µs to acquireMachinesLock for "no-preload-066000"
	I0815 17:04:42.904495    5736 start.go:93] Provisioning new machine with config: &{Name:no-preload-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:04:42.904547    5736 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:04:42.904583    5736 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0815 17:04:42.904587    5736 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 17:04:42.908868    5736 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:04:42.916909    5736 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0815 17:04:42.916917    5736 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 17:04:42.918901    5736 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0815 17:04:42.918993    5736 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 17:04:42.919000    5736 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 17:04:42.919054    5736 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 17:04:42.919053    5736 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 17:04:42.925675    5736 start.go:159] libmachine.API.Create for "no-preload-066000" (driver="qemu2")
	I0815 17:04:42.925700    5736 client.go:168] LocalClient.Create starting
	I0815 17:04:42.925797    5736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:04:42.925828    5736 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:42.925839    5736 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:42.925876    5736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:04:42.925900    5736 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:42.925906    5736 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:42.926282    5736 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:04:43.082948    5736 main.go:141] libmachine: Creating SSH key...
	I0815 17:04:43.243855    5736 main.go:141] libmachine: Creating Disk image...
	I0815 17:04:43.243871    5736 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:04:43.244137    5736 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/disk.qcow2
	I0815 17:04:43.253572    5736 main.go:141] libmachine: STDOUT: 
	I0815 17:04:43.253593    5736 main.go:141] libmachine: STDERR: 
	I0815 17:04:43.253635    5736 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/disk.qcow2 +20000M
	I0815 17:04:43.261995    5736 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:04:43.262008    5736 main.go:141] libmachine: STDERR: 
	I0815 17:04:43.262019    5736 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/disk.qcow2
	I0815 17:04:43.262022    5736 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:04:43.262032    5736 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:04:43.262055    5736 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:de:78:c0:9d:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/disk.qcow2
	I0815 17:04:43.263908    5736 main.go:141] libmachine: STDOUT: 
	I0815 17:04:43.263925    5736 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:04:43.263942    5736 client.go:171] duration metric: took 338.238125ms to LocalClient.Create
	I0815 17:04:43.315759    5736 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0815 17:04:43.318425    5736 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0815 17:04:43.336281    5736 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0815 17:04:43.350021    5736 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0815 17:04:43.388502    5736 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0815 17:04:43.405700    5736 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0815 17:04:43.430948    5736 cache.go:162] opening:  /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0815 17:04:43.484794    5736 cache.go:157] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0815 17:04:43.484808    5736 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 580.592125ms
	I0815 17:04:43.484813    5736 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0815 17:04:45.264131    5736 start.go:128] duration metric: took 2.359563125s to createHost
	I0815 17:04:45.264160    5736 start.go:83] releasing machines lock for "no-preload-066000", held for 2.359667291s
	W0815 17:04:45.264206    5736 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:45.279124    5736 out.go:177] * Deleting "no-preload-066000" in qemu2 ...
	W0815 17:04:45.298390    5736 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:45.298403    5736 start.go:729] Will try again in 5 seconds ...
	I0815 17:04:45.961712    5736 cache.go:157] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0815 17:04:45.961753    5736 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.057567958s
	I0815 17:04:45.961772    5736 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0815 17:04:46.860402    5736 cache.go:157] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0815 17:04:46.860435    5736 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 3.956133875s
	I0815 17:04:46.860450    5736 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0815 17:04:47.626726    5736 cache.go:157] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0815 17:04:47.626777    5736 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 4.722641709s
	I0815 17:04:47.626799    5736 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0815 17:04:47.917094    5736 cache.go:157] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0815 17:04:47.917116    5736 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 5.01301775s
	I0815 17:04:47.917128    5736 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0815 17:04:48.216555    5736 cache.go:157] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0815 17:04:48.216584    5736 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 5.312375875s
	I0815 17:04:48.216598    5736 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0815 17:04:50.298547    5736 start.go:360] acquireMachinesLock for no-preload-066000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:04:50.298892    5736 start.go:364] duration metric: took 289.417µs to acquireMachinesLock for "no-preload-066000"
	I0815 17:04:50.298990    5736 start.go:93] Provisioning new machine with config: &{Name:no-preload-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:04:50.299159    5736 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:04:50.312665    5736 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:04:50.356273    5736 start.go:159] libmachine.API.Create for "no-preload-066000" (driver="qemu2")
	I0815 17:04:50.356325    5736 client.go:168] LocalClient.Create starting
	I0815 17:04:50.356452    5736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:04:50.356508    5736 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:50.356529    5736 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:50.356586    5736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:04:50.356637    5736 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:50.356651    5736 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:50.357115    5736 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:04:50.515585    5736 main.go:141] libmachine: Creating SSH key...
	I0815 17:04:50.536126    5736 cache.go:157] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0815 17:04:50.536143    5736 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 7.631940458s
	I0815 17:04:50.536154    5736 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0815 17:04:50.536168    5736 cache.go:87] Successfully saved all images to host disk.
	I0815 17:04:50.604379    5736 main.go:141] libmachine: Creating Disk image...
	I0815 17:04:50.604387    5736 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:04:50.604662    5736 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/disk.qcow2
	I0815 17:04:50.614358    5736 main.go:141] libmachine: STDOUT: 
	I0815 17:04:50.614383    5736 main.go:141] libmachine: STDERR: 
	I0815 17:04:50.614430    5736 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/disk.qcow2 +20000M
	I0815 17:04:50.622594    5736 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:04:50.622609    5736 main.go:141] libmachine: STDERR: 
	I0815 17:04:50.622618    5736 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/disk.qcow2
	I0815 17:04:50.622623    5736 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:04:50.622638    5736 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:04:50.622678    5736 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:1a:9d:a3:45:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/disk.qcow2
	I0815 17:04:50.624474    5736 main.go:141] libmachine: STDOUT: 
	I0815 17:04:50.624490    5736 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:04:50.624502    5736 client.go:171] duration metric: took 268.172375ms to LocalClient.Create
	I0815 17:04:52.625191    5736 start.go:128] duration metric: took 2.326008375s to createHost
	I0815 17:04:52.625223    5736 start.go:83] releasing machines lock for "no-preload-066000", held for 2.326306334s
	W0815 17:04:52.625345    5736 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-066000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-066000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:52.637532    5736 out.go:201] 
	W0815 17:04:52.645526    5736 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:04:52.645543    5736 out.go:270] * 
	* 
	W0815 17:04:52.646068    5736 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:04:52.656472    5736 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-066000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000: exit status 7 (38.729416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.93s)
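Every qemu2 start in this run fails at the same point: socket_vmnet_client cannot reach "/var/run/socket_vmnet" (Connection refused), so VM creation aborts before the guest ever boots, and the automatic retry above hits the identical error. Below is a host-side probe of that unix socket, sketched with the SocketVMnetPath value from the cluster config in the log; it is a diagnostic sketch, not part of minikube:

	// socketprobe.go: hypothetical check mirroring the connection that
	// socket_vmnet_client attempts before launching qemu-system-aarch64.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from SocketVMnetPath in the logged cluster config.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here matches this report and means the
			// socket_vmnet daemon is not running (or not listening) on the host.
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}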

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-066000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-066000 create -f testdata/busybox.yaml: exit status 1 (28.046291ms)

** stderr ** 
	error: context "no-preload-066000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-066000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000: exit status 7 (32.128625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-066000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000: exit status 7 (39.254208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-066000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-066000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-066000 describe deploy/metrics-server -n kube-system: exit status 1 (29.184583ms)

** stderr ** 
	error: context "no-preload-066000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-066000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000: exit status 7 (29.973167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-066000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-066000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.175359375s)

-- stdout --
	* [no-preload-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-066000" primary control-plane node in "no-preload-066000" cluster
	* Restarting existing qemu2 VM for "no-preload-066000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-066000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:04:56.172608    5817 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:04:56.172738    5817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:04:56.172742    5817 out.go:358] Setting ErrFile to fd 2...
	I0815 17:04:56.172744    5817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:04:56.172863    5817 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:04:56.173930    5817 out.go:352] Setting JSON to false
	I0815 17:04:56.190278    5817 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3864,"bootTime":1723762832,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:04:56.190350    5817 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:04:56.194802    5817 out.go:177] * [no-preload-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:04:56.201652    5817 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:04:56.201691    5817 notify.go:220] Checking for updates...
	I0815 17:04:56.208815    5817 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:04:56.211806    5817 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:04:56.214812    5817 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:04:56.217812    5817 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:04:56.219253    5817 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:04:56.223101    5817 config.go:182] Loaded profile config "no-preload-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:04:56.223355    5817 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:04:56.227826    5817 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 17:04:56.232729    5817 start.go:297] selected driver: qemu2
	I0815 17:04:56.232736    5817 start.go:901] validating driver "qemu2" against &{Name:no-preload-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:04:56.232787    5817 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:04:56.235013    5817 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:04:56.235039    5817 cni.go:84] Creating CNI manager for ""
	I0815 17:04:56.235045    5817 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 17:04:56.235077    5817 start.go:340] cluster config:
	{Name:no-preload-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:04:56.238512    5817 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:56.245700    5817 out.go:177] * Starting "no-preload-066000" primary control-plane node in "no-preload-066000" cluster
	I0815 17:04:56.249838    5817 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:04:56.249922    5817 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/no-preload-066000/config.json ...
	I0815 17:04:56.249940    5817 cache.go:107] acquiring lock: {Name:mk5ebd5d9fabf0d0ad1dd23fa899fc4d8a6c6372 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:56.249983    5817 cache.go:107] acquiring lock: {Name:mkfe0a62d51ca271bdb32ca22575e4900d17546d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:56.249960    5817 cache.go:107] acquiring lock: {Name:mk6e9e1f1ce1be342cc27bbace4cb70efe9cc45d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:56.250016    5817 cache.go:107] acquiring lock: {Name:mk69ee381b0cae9fc0dfd5bf4da924629ffc2c6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:56.250030    5817 cache.go:107] acquiring lock: {Name:mke1fd1c1394231d7cbb9be17fa1281e78522d0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:56.250074    5817 cache.go:115] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0815 17:04:56.250082    5817 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 66.792µs
	I0815 17:04:56.250088    5817 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0815 17:04:56.250074    5817 cache.go:115] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0815 17:04:56.250021    5817 cache.go:107] acquiring lock: {Name:mkb28e624bf9dce290f648916c84012761012b7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:56.250105    5817 cache.go:115] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0815 17:04:56.250111    5817 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 173.167µs
	I0815 17:04:56.250118    5817 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0815 17:04:56.250032    5817 cache.go:115] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0815 17:04:56.250086    5817 cache.go:107] acquiring lock: {Name:mk3c9482a26a1f7dc0463ecc2a15facbbc09d16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:56.250137    5817 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 192µs
	I0815 17:04:56.250152    5817 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0815 17:04:56.250094    5817 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 143.166µs
	I0815 17:04:56.250182    5817 cache.go:115] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0815 17:04:56.250187    5817 cache.go:115] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0815 17:04:56.250181    5817 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0815 17:04:56.250192    5817 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 171.541µs
	I0815 17:04:56.250222    5817 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0815 17:04:56.250162    5817 cache.go:107] acquiring lock: {Name:mkf0428cf3c6704f57aac01b10817eeae3ab98d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:04:56.250190    5817 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 121.333µs
	I0815 17:04:56.250257    5817 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0815 17:04:56.250096    5817 cache.go:115] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0815 17:04:56.250266    5817 cache.go:115] /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0815 17:04:56.250276    5817 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 142.167µs
	I0815 17:04:56.250280    5817 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0815 17:04:56.250265    5817 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 235.5µs
	I0815 17:04:56.250285    5817 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0815 17:04:56.250288    5817 cache.go:87] Successfully saved all images to host disk.
	I0815 17:04:56.250364    5817 start.go:360] acquireMachinesLock for no-preload-066000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:04:56.250401    5817 start.go:364] duration metric: took 31.291µs to acquireMachinesLock for "no-preload-066000"
	I0815 17:04:56.250411    5817 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:04:56.250416    5817 fix.go:54] fixHost starting: 
	I0815 17:04:56.250532    5817 fix.go:112] recreateIfNeeded on no-preload-066000: state=Stopped err=<nil>
	W0815 17:04:56.250540    5817 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:04:56.257719    5817 out.go:177] * Restarting existing qemu2 VM for "no-preload-066000" ...
	I0815 17:04:56.261840    5817 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:04:56.261887    5817 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:1a:9d:a3:45:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/disk.qcow2
	I0815 17:04:56.263909    5817 main.go:141] libmachine: STDOUT: 
	I0815 17:04:56.263929    5817 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:04:56.263953    5817 fix.go:56] duration metric: took 13.538125ms for fixHost
	I0815 17:04:56.263957    5817 start.go:83] releasing machines lock for "no-preload-066000", held for 13.551541ms
	W0815 17:04:56.263964    5817 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:04:56.263991    5817 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:04:56.263996    5817 start.go:729] Will try again in 5 seconds ...
	I0815 17:05:01.266247    5817 start.go:360] acquireMachinesLock for no-preload-066000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:05:01.266706    5817 start.go:364] duration metric: took 365.625µs to acquireMachinesLock for "no-preload-066000"
	I0815 17:05:01.266871    5817 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:05:01.266896    5817 fix.go:54] fixHost starting: 
	I0815 17:05:01.267564    5817 fix.go:112] recreateIfNeeded on no-preload-066000: state=Stopped err=<nil>
	W0815 17:05:01.267593    5817 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:05:01.271335    5817 out.go:177] * Restarting existing qemu2 VM for "no-preload-066000" ...
	I0815 17:05:01.277045    5817 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:05:01.277368    5817 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:1a:9d:a3:45:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/no-preload-066000/disk.qcow2
	I0815 17:05:01.287687    5817 main.go:141] libmachine: STDOUT: 
	I0815 17:05:01.287771    5817 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:05:01.287902    5817 fix.go:56] duration metric: took 20.996042ms for fixHost
	I0815 17:05:01.287925    5817 start.go:83] releasing machines lock for "no-preload-066000", held for 21.192292ms
	W0815 17:05:01.288193    5817 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-066000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-066000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:01.296873    5817 out.go:201] 
	W0815 17:05:01.300112    5817 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:05:01.300138    5817 out.go:270] * 
	* 
	W0815 17:05:01.302006    5817 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:05:01.311148    5817 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-066000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000: exit status 7 (64.78275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.24s)
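Every failure in this serial group stems from the single error visible in the stderr above: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never boots. A minimal triage sketch on the affected host, assuming socket_vmnet is installed at the paths the log already shows (the --vmnet-gateway value is the upstream default from the socket_vmnet README, not taken from this run):

	# Check whether the socket the driver dials actually exists
	ls -l /var/run/socket_vmnet
	# If the daemon is down, start it manually with the binary path from the log
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet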

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-066000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000: exit status 7 (33.155458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
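This failure (and the AddonExistsAfterStop failure below) is a cascade rather than an independent bug: SecondStart never brought the VM up, so no kubeconfig context was written, and every client call fails with context "no-preload-066000" does not exist. One way to confirm the missing context, using the KUBECONFIG path from this run (standard kubectl; not part of the test itself):

	KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig kubectl config get-contexts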

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-066000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-066000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-066000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.177125ms)

** stderr ** 
	error: context "no-preload-066000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-066000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000: exit status 7 (29.526458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-066000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000: exit status 7 (29.766209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
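The -want +got block above is a go-cmp style diff: all eight images are reported missing only because "image list" ran against a stopped VM, not because caching failed; the cache.go lines earlier in the log show every tarball saved to the host. A sketch for checking both sides by hand, reusing paths and the exact command from the log (the jq filter is an assumption about the JSON field names):

	# Tarballs written by the cache step on the host
	find /Users/jenkins/minikube-integration/19452-964/.minikube/cache/images/arm64 -type f
	# Images actually loaded in the guest, once it is running
	out/minikube-darwin-arm64 -p no-preload-066000 image list --format=json | jq -r '.[].repoTags[]'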

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-066000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-066000 --alsologtostderr -v=1: exit status 83 (43.180125ms)

-- stdout --
	* The control-plane node no-preload-066000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-066000"

-- /stdout --
** stderr ** 
	I0815 17:05:01.576033    5836 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:01.576202    5836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:01.576209    5836 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:01.576212    5836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:01.576350    5836 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:05:01.576604    5836 out.go:352] Setting JSON to false
	I0815 17:05:01.576614    5836 mustload.go:65] Loading cluster: no-preload-066000
	I0815 17:05:01.576815    5836 config.go:182] Loaded profile config "no-preload-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:05:01.582626    5836 out.go:177] * The control-plane node no-preload-066000 host is not running: state=Stopped
	I0815 17:05:01.586636    5836 out.go:177]   To start a cluster, run: "minikube start -p no-preload-066000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-066000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000: exit status 7 (30.091791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-066000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000: exit status 7 (29.525875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
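Exit status 83 here (as opposed to the 80 seen for GUEST_PROVISION) appears to be minikube's "guest not running" advisory code: pause refuses to act on a stopped host and prints the start hint instead of erroring out. A defensive wrapper sketch around the same commands the test already runs (the guard is hypothetical, not the test's own logic):

	# Only attempt pause when the host reports Running
	if [ "$(out/minikube-darwin-arm64 status -p no-preload-066000 --format='{{.Host}}')" = "Running" ]; then
	  out/minikube-darwin-arm64 pause -p no-preload-066000 --alsologtostderr -v=1
	fi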

TestStartStop/group/embed-certs/serial/FirstStart (12.07s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-645000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-645000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (12.001941916s)

-- stdout --
	* [embed-certs-645000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-645000" primary control-plane node in "embed-certs-645000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-645000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:05:01.899992    5853 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:01.900111    5853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:01.900115    5853 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:01.900118    5853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:01.900255    5853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:05:01.901418    5853 out.go:352] Setting JSON to false
	I0815 17:05:01.917920    5853 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3869,"bootTime":1723762832,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:05:01.917990    5853 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:05:01.921534    5853 out.go:177] * [embed-certs-645000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:05:01.928634    5853 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:05:01.928739    5853 notify.go:220] Checking for updates...
	I0815 17:05:01.934588    5853 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:05:01.937629    5853 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:05:01.940648    5853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:05:01.945813    5853 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:05:01.948616    5853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:05:01.951885    5853 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:05:01.951946    5853 config.go:182] Loaded profile config "stopped-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 17:05:01.951998    5853 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:05:01.956536    5853 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 17:05:01.963543    5853 start.go:297] selected driver: qemu2
	I0815 17:05:01.963552    5853 start.go:901] validating driver "qemu2" against <nil>
	I0815 17:05:01.963559    5853 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:05:01.965739    5853 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:05:01.968621    5853 out.go:177] * Automatically selected the socket_vmnet network
	I0815 17:05:01.971679    5853 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:05:01.971711    5853 cni.go:84] Creating CNI manager for ""
	I0815 17:05:01.971721    5853 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 17:05:01.971725    5853 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 17:05:01.971757    5853 start.go:340] cluster config:
	{Name:embed-certs-645000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socke
t_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:01.975258    5853 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:05:01.982591    5853 out.go:177] * Starting "embed-certs-645000" primary control-plane node in "embed-certs-645000" cluster
	I0815 17:05:01.986551    5853 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:05:01.986567    5853 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:05:01.986577    5853 cache.go:56] Caching tarball of preloaded images
	I0815 17:05:01.986634    5853 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:05:01.986640    5853 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:05:01.986702    5853 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/embed-certs-645000/config.json ...
	I0815 17:05:01.986713    5853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/embed-certs-645000/config.json: {Name:mkd525370d611893c9763a660623aaac4ab6c5ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:01.986972    5853 start.go:360] acquireMachinesLock for embed-certs-645000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:05:01.987005    5853 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "embed-certs-645000"
	I0815 17:05:01.987019    5853 start.go:93] Provisioning new machine with config: &{Name:embed-certs-645000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:embed-certs-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:05:01.987049    5853 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:05:01.991632    5853 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:05:02.006714    5853 start.go:159] libmachine.API.Create for "embed-certs-645000" (driver="qemu2")
	I0815 17:05:02.006737    5853 client.go:168] LocalClient.Create starting
	I0815 17:05:02.006798    5853 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:05:02.006829    5853 main.go:141] libmachine: Decoding PEM data...
	I0815 17:05:02.006841    5853 main.go:141] libmachine: Parsing certificate...
	I0815 17:05:02.006878    5853 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:05:02.006900    5853 main.go:141] libmachine: Decoding PEM data...
	I0815 17:05:02.006908    5853 main.go:141] libmachine: Parsing certificate...
	I0815 17:05:02.007235    5853 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:05:02.160423    5853 main.go:141] libmachine: Creating SSH key...
	I0815 17:05:02.232791    5853 main.go:141] libmachine: Creating Disk image...
	I0815 17:05:02.232796    5853 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:05:02.233025    5853 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/disk.qcow2
	I0815 17:05:02.242334    5853 main.go:141] libmachine: STDOUT: 
	I0815 17:05:02.242353    5853 main.go:141] libmachine: STDERR: 
	I0815 17:05:02.242398    5853 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/disk.qcow2 +20000M
	I0815 17:05:02.250338    5853 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:05:02.250356    5853 main.go:141] libmachine: STDERR: 
	I0815 17:05:02.250372    5853 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/disk.qcow2
	I0815 17:05:02.250375    5853 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:05:02.250390    5853 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:05:02.250420    5853 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:d7:67:0c:b6:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/disk.qcow2
	I0815 17:05:02.252065    5853 main.go:141] libmachine: STDOUT: 
	I0815 17:05:02.252084    5853 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:05:02.252101    5853 client.go:171] duration metric: took 245.359584ms to LocalClient.Create
	I0815 17:05:04.254328    5853 start.go:128] duration metric: took 2.267232792s to createHost
	I0815 17:05:04.254403    5853 start.go:83] releasing machines lock for "embed-certs-645000", held for 2.267380458s
	W0815 17:05:04.254488    5853 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:04.261757    5853 out.go:177] * Deleting "embed-certs-645000" in qemu2 ...
	W0815 17:05:04.284525    5853 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:04.284553    5853 start.go:729] Will try again in 5 seconds ...
	I0815 17:05:09.286661    5853 start.go:360] acquireMachinesLock for embed-certs-645000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:05:11.347784    5853 start.go:364] duration metric: took 2.061077625s to acquireMachinesLock for "embed-certs-645000"
	I0815 17:05:11.347971    5853 start.go:93] Provisioning new machine with config: &{Name:embed-certs-645000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:embed-certs-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:05:11.348315    5853 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:05:11.353915    5853 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:05:11.405170    5853 start.go:159] libmachine.API.Create for "embed-certs-645000" (driver="qemu2")
	I0815 17:05:11.405216    5853 client.go:168] LocalClient.Create starting
	I0815 17:05:11.405341    5853 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:05:11.405408    5853 main.go:141] libmachine: Decoding PEM data...
	I0815 17:05:11.405429    5853 main.go:141] libmachine: Parsing certificate...
	I0815 17:05:11.405505    5853 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:05:11.405561    5853 main.go:141] libmachine: Decoding PEM data...
	I0815 17:05:11.405576    5853 main.go:141] libmachine: Parsing certificate...
	I0815 17:05:11.406073    5853 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:05:11.571538    5853 main.go:141] libmachine: Creating SSH key...
	I0815 17:05:11.800148    5853 main.go:141] libmachine: Creating Disk image...
	I0815 17:05:11.800157    5853 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:05:11.800469    5853 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/disk.qcow2
	I0815 17:05:11.810412    5853 main.go:141] libmachine: STDOUT: 
	I0815 17:05:11.810445    5853 main.go:141] libmachine: STDERR: 
	I0815 17:05:11.810502    5853 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/disk.qcow2 +20000M
	I0815 17:05:11.818514    5853 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:05:11.818531    5853 main.go:141] libmachine: STDERR: 
	I0815 17:05:11.818539    5853 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/disk.qcow2
	I0815 17:05:11.818544    5853 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:05:11.818560    5853 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:05:11.818589    5853 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:83:61:6a:4f:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/disk.qcow2
	I0815 17:05:11.820274    5853 main.go:141] libmachine: STDOUT: 
	I0815 17:05:11.820291    5853 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:05:11.820308    5853 client.go:171] duration metric: took 415.083875ms to LocalClient.Create
	I0815 17:05:13.822585    5853 start.go:128] duration metric: took 2.474206083s to createHost
	I0815 17:05:13.822649    5853 start.go:83] releasing machines lock for "embed-certs-645000", held for 2.474815916s
	W0815 17:05:13.823069    5853 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:13.840870    5853 out.go:201] 
	W0815 17:05:13.846768    5853 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:05:13.846800    5853 out.go:270] * 
	* 
	W0815 17:05:13.849377    5853 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:05:13.859658    5853 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-645000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000: exit status 7 (64.74825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (12.07s)
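Unlike the no-preload runs, this is a fresh create: the log shows the full create, dial-fail, delete, retry, fail sequence, and both qemu-img convert/resize steps succeed, so only the socket_vmnet dial is broken. To take the daemon out of the equation entirely, the qemu2 driver also supports minikube's user-mode networking (the --network=builtin value is assumed from the minikube docs, not this run; builtin has known port-forwarding limitations):

	out/minikube-darwin-arm64 start -p embed-certs-645000 --memory=2200 --driver=qemu2 --network=builtin --kubernetes-version=v1.31.0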

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-911000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-911000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.746720208s)

-- stdout --
	* [default-k8s-diff-port-911000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-911000" primary control-plane node in "default-k8s-diff-port-911000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-911000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:05:08.989209    5873 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:08.989318    5873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:08.989321    5873 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:08.989323    5873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:08.989446    5873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:05:08.990612    5873 out.go:352] Setting JSON to false
	I0815 17:05:09.007005    5873 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3876,"bootTime":1723762832,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:05:09.007078    5873 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:05:09.011938    5873 out.go:177] * [default-k8s-diff-port-911000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:05:09.017913    5873 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:05:09.017959    5873 notify.go:220] Checking for updates...
	I0815 17:05:09.024834    5873 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:05:09.027878    5873 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:05:09.030912    5873 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:05:09.033898    5873 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:05:09.040915    5873 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:05:09.044082    5873 config.go:182] Loaded profile config "embed-certs-645000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:05:09.044140    5873 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:05:09.044194    5873 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:05:09.048870    5873 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 17:05:09.054806    5873 start.go:297] selected driver: qemu2
	I0815 17:05:09.054814    5873 start.go:901] validating driver "qemu2" against <nil>
	I0815 17:05:09.054820    5873 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:05:09.057259    5873 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:05:09.059911    5873 out.go:177] * Automatically selected the socket_vmnet network
	I0815 17:05:09.062951    5873 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:05:09.062987    5873 cni.go:84] Creating CNI manager for ""
	I0815 17:05:09.062997    5873 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 17:05:09.063001    5873 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 17:05:09.063024    5873 start.go:340] cluster config:
	{Name:default-k8s-diff-port-911000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:09.066963    5873 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:05:09.074871    5873 out.go:177] * Starting "default-k8s-diff-port-911000" primary control-plane node in "default-k8s-diff-port-911000" cluster
	I0815 17:05:09.078879    5873 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:05:09.078893    5873 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:05:09.078904    5873 cache.go:56] Caching tarball of preloaded images
	I0815 17:05:09.078965    5873 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:05:09.078970    5873 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:05:09.079027    5873 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/default-k8s-diff-port-911000/config.json ...
	I0815 17:05:09.079039    5873 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/default-k8s-diff-port-911000/config.json: {Name:mkc7eebfc456dcaa0f9abaa36f153864c9fbb4b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:09.079390    5873 start.go:360] acquireMachinesLock for default-k8s-diff-port-911000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:05:09.079423    5873 start.go:364] duration metric: took 26.083µs to acquireMachinesLock for "default-k8s-diff-port-911000"
	I0815 17:05:09.079436    5873 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-911000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:05:09.079483    5873 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:05:09.083985    5873 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:05:09.101479    5873 start.go:159] libmachine.API.Create for "default-k8s-diff-port-911000" (driver="qemu2")
	I0815 17:05:09.101507    5873 client.go:168] LocalClient.Create starting
	I0815 17:05:09.101562    5873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:05:09.101594    5873 main.go:141] libmachine: Decoding PEM data...
	I0815 17:05:09.101603    5873 main.go:141] libmachine: Parsing certificate...
	I0815 17:05:09.101639    5873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:05:09.101663    5873 main.go:141] libmachine: Decoding PEM data...
	I0815 17:05:09.101671    5873 main.go:141] libmachine: Parsing certificate...
	I0815 17:05:09.102123    5873 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:05:09.265759    5873 main.go:141] libmachine: Creating SSH key...
	I0815 17:05:09.326405    5873 main.go:141] libmachine: Creating Disk image...
	I0815 17:05:09.326410    5873 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:05:09.326604    5873 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/disk.qcow2
	I0815 17:05:09.335671    5873 main.go:141] libmachine: STDOUT: 
	I0815 17:05:09.335688    5873 main.go:141] libmachine: STDERR: 
	I0815 17:05:09.335729    5873 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/disk.qcow2 +20000M
	I0815 17:05:09.343627    5873 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:05:09.343648    5873 main.go:141] libmachine: STDERR: 
	I0815 17:05:09.343660    5873 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/disk.qcow2
	I0815 17:05:09.343665    5873 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:05:09.343678    5873 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:05:09.343701    5873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:0d:99:ff:75:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/disk.qcow2
	I0815 17:05:09.345354    5873 main.go:141] libmachine: STDOUT: 
	I0815 17:05:09.345368    5873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:05:09.345387    5873 client.go:171] duration metric: took 243.874958ms to LocalClient.Create
	I0815 17:05:11.347569    5873 start.go:128] duration metric: took 2.26805675s to createHost
	I0815 17:05:11.347634    5873 start.go:83] releasing machines lock for "default-k8s-diff-port-911000", held for 2.268190709s
	W0815 17:05:11.347715    5873 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:11.363854    5873 out.go:177] * Deleting "default-k8s-diff-port-911000" in qemu2 ...
	W0815 17:05:11.386478    5873 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:11.386498    5873 start.go:729] Will try again in 5 seconds ...
	I0815 17:05:16.388812    5873 start.go:360] acquireMachinesLock for default-k8s-diff-port-911000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:05:16.389308    5873 start.go:364] duration metric: took 395.333µs to acquireMachinesLock for "default-k8s-diff-port-911000"
	I0815 17:05:16.389478    5873 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-911000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:05:16.389685    5873 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:05:16.396575    5873 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:05:16.446891    5873 start.go:159] libmachine.API.Create for "default-k8s-diff-port-911000" (driver="qemu2")
	I0815 17:05:16.446955    5873 client.go:168] LocalClient.Create starting
	I0815 17:05:16.447076    5873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:05:16.447134    5873 main.go:141] libmachine: Decoding PEM data...
	I0815 17:05:16.447151    5873 main.go:141] libmachine: Parsing certificate...
	I0815 17:05:16.447241    5873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:05:16.447297    5873 main.go:141] libmachine: Decoding PEM data...
	I0815 17:05:16.447309    5873 main.go:141] libmachine: Parsing certificate...
	I0815 17:05:16.447814    5873 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:05:16.605611    5873 main.go:141] libmachine: Creating SSH key...
	I0815 17:05:16.638660    5873 main.go:141] libmachine: Creating Disk image...
	I0815 17:05:16.638667    5873 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:05:16.638912    5873 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/disk.qcow2
	I0815 17:05:16.647983    5873 main.go:141] libmachine: STDOUT: 
	I0815 17:05:16.648000    5873 main.go:141] libmachine: STDERR: 
	I0815 17:05:16.648051    5873 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/disk.qcow2 +20000M
	I0815 17:05:16.656125    5873 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:05:16.656144    5873 main.go:141] libmachine: STDERR: 
	I0815 17:05:16.656169    5873 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/disk.qcow2
	I0815 17:05:16.656207    5873 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:05:16.656213    5873 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:05:16.656236    5873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:56:10:ae:7d:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/disk.qcow2
	I0815 17:05:16.657856    5873 main.go:141] libmachine: STDOUT: 
	I0815 17:05:16.657870    5873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:05:16.657882    5873 client.go:171] duration metric: took 210.920292ms to LocalClient.Create
	I0815 17:05:18.660101    5873 start.go:128] duration metric: took 2.270362417s to createHost
	I0815 17:05:18.660172    5873 start.go:83] releasing machines lock for "default-k8s-diff-port-911000", held for 2.270830583s
	W0815 17:05:18.660493    5873 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-911000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-911000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:18.674179    5873 out.go:201] 
	W0815 17:05:18.682269    5873 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:05:18.682295    5873 out.go:270] * 
	* 
	W0815 17:05:18.684884    5873 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:05:18.692141    5873 out.go:201] 

** /stderr **
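
The disk-creation step captured above is a two-stage qemu-img sequence: convert the raw seed image to qcow2, then grow the qcow2 file by 20000M. A minimal Go sketch of the same sequence, shelling out the way libmachine does and logging both output streams afterwards; the file paths here are placeholders, not the paths from this run:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and, like libmachine in the log above,
// prints STDOUT and STDERR separately once the process exits.
func run(name string, args ...string) error {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run()
	fmt.Printf("STDOUT: %s\nSTDERR: %s\n", stdout.String(), stderr.String())
	return err
}

func main() {
	raw, qcow := "disk.qcow2.raw", "disk.qcow2" // placeholder paths
	// Stage 1: convert the raw boot image into a qcow2 file.
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Stage 2: grow the image by 20000 MB, matching the "+20000M" in the log.
	if err := run("qemu-img", "resize", qcow, "+20000M"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
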
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-911000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000: exit status 7 (69.393625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.82s)
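
Every VM start in this group fails at the same point: the connect to the /var/run/socket_vmnet Unix socket is refused before QEMU even boots, which indicates the socket_vmnet daemon is not running on the agent. A short Go sketch of the same reachability check (a hypothetical diagnostic, not part of the minikube tree):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from the failing socket_vmnet_client command line above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "Connection refused" here reproduces the STDERR in the log:
		// the socket file may exist, but no daemon is accepting on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
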

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-645000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-645000 create -f testdata/busybox.yaml: exit status 1 (29.003916ms)

** stderr ** 
	error: context "embed-certs-645000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-645000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000: exit status 7 (28.536834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-645000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000: exit status 7 (29.313042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
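
This kubectl failure is secondary: because the earlier start never succeeded, no "embed-certs-645000" context was ever written to the kubeconfig, so every kubectl --context invocation fails locally without reaching an API server. A sketch that makes the precondition explicit by loading the kubeconfig with client-go's clientcmd package (the same loader kubectl uses); the context name is taken from the log and error handling is simplified:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG as exported by the test environment.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	const name = "embed-certs-645000"
	if _, ok := cfg.Contexts[name]; !ok {
		// Same condition kubectl reports as: context "..." does not exist.
		fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
		os.Exit(1)
	}
	fmt.Printf("context %q present\n", name)
}
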

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-645000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-645000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-645000 describe deploy/metrics-server -n kube-system: exit status 1 (27.028584ms)

** stderr ** 
	error: context "embed-certs-645000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-645000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000: exit status 7 (29.182ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
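
The assertion at start_stop_delete_test.go:221 expects the metrics-server deployment to reference the registry-rewritten image " fake.domain/registry.k8s.io/echoserver:1.4", i.e. the --registries override prepended to the --images override. A simplified sketch of that substring check over kubectl describe output (the real harness drives kubectl through its own helpers):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "embed-certs-645000",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		// In this run the describe itself fails: the context does not exist.
		fmt.Fprintf(os.Stderr, "describe failed: %v\n%s", err, out)
		os.Exit(1)
	}
	const want = " fake.domain/registry.k8s.io/echoserver:1.4"
	if !strings.Contains(string(out), want) {
		fmt.Fprintf(os.Stderr, "addon did not load correct image; expected to contain %q\n", want)
		os.Exit(1)
	}
}
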

TestStartStop/group/embed-certs/serial/SecondStart (6.42s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-645000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-645000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.355753084s)

-- stdout --
	* [embed-certs-645000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-645000" primary control-plane node in "embed-certs-645000" cluster
	* Restarting existing qemu2 VM for "embed-certs-645000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-645000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:05:17.430514    5929 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:17.430669    5929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:17.430673    5929 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:17.430675    5929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:17.430815    5929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:05:17.432074    5929 out.go:352] Setting JSON to false
	I0815 17:05:17.448296    5929 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3885,"bootTime":1723762832,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:05:17.448367    5929 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:05:17.453452    5929 out.go:177] * [embed-certs-645000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:05:17.460478    5929 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:05:17.460518    5929 notify.go:220] Checking for updates...
	I0815 17:05:17.467430    5929 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:05:17.470493    5929 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:05:17.473401    5929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:05:17.476489    5929 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:05:17.479449    5929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:05:17.482746    5929 config.go:182] Loaded profile config "embed-certs-645000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:05:17.482990    5929 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:05:17.487447    5929 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 17:05:17.494427    5929 start.go:297] selected driver: qemu2
	I0815 17:05:17.494439    5929 start.go:901] validating driver "qemu2" against &{Name:embed-certs-645000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:17.494539    5929 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:05:17.496786    5929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:05:17.496821    5929 cni.go:84] Creating CNI manager for ""
	I0815 17:05:17.496828    5929 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 17:05:17.496859    5929 start.go:340] cluster config:
	{Name:embed-certs-645000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:17.500420    5929 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:05:17.503568    5929 out.go:177] * Starting "embed-certs-645000" primary control-plane node in "embed-certs-645000" cluster
	I0815 17:05:17.506364    5929 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:05:17.506380    5929 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:05:17.506388    5929 cache.go:56] Caching tarball of preloaded images
	I0815 17:05:17.506447    5929 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:05:17.506453    5929 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:05:17.506511    5929 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/embed-certs-645000/config.json ...
	I0815 17:05:17.506923    5929 start.go:360] acquireMachinesLock for embed-certs-645000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:05:18.660340    5929 start.go:364] duration metric: took 1.153368291s to acquireMachinesLock for "embed-certs-645000"
	I0815 17:05:18.660626    5929 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:05:18.660660    5929 fix.go:54] fixHost starting: 
	I0815 17:05:18.661356    5929 fix.go:112] recreateIfNeeded on embed-certs-645000: state=Stopped err=<nil>
	W0815 17:05:18.661400    5929 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:05:18.678022    5929 out.go:177] * Restarting existing qemu2 VM for "embed-certs-645000" ...
	I0815 17:05:18.685190    5929 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:05:18.685421    5929 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:83:61:6a:4f:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/disk.qcow2
	I0815 17:05:18.694882    5929 main.go:141] libmachine: STDOUT: 
	I0815 17:05:18.694955    5929 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:05:18.695072    5929 fix.go:56] duration metric: took 34.420917ms for fixHost
	I0815 17:05:18.695088    5929 start.go:83] releasing machines lock for "embed-certs-645000", held for 34.665666ms
	W0815 17:05:18.695113    5929 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:05:18.695256    5929 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:18.695270    5929 start.go:729] Will try again in 5 seconds ...
	I0815 17:05:23.697513    5929 start.go:360] acquireMachinesLock for embed-certs-645000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:05:23.697948    5929 start.go:364] duration metric: took 280.417µs to acquireMachinesLock for "embed-certs-645000"
	I0815 17:05:23.698075    5929 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:05:23.698094    5929 fix.go:54] fixHost starting: 
	I0815 17:05:23.698824    5929 fix.go:112] recreateIfNeeded on embed-certs-645000: state=Stopped err=<nil>
	W0815 17:05:23.698852    5929 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:05:23.704515    5929 out.go:177] * Restarting existing qemu2 VM for "embed-certs-645000" ...
	I0815 17:05:23.712481    5929 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:05:23.712700    5929 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:83:61:6a:4f:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/embed-certs-645000/disk.qcow2
	I0815 17:05:23.721681    5929 main.go:141] libmachine: STDOUT: 
	I0815 17:05:23.721757    5929 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:05:23.721847    5929 fix.go:56] duration metric: took 23.753833ms for fixHost
	I0815 17:05:23.721864    5929 start.go:83] releasing machines lock for "embed-certs-645000", held for 23.893709ms
	W0815 17:05:23.722070    5929 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-645000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-645000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:23.730376    5929 out.go:201] 
	W0815 17:05:23.734440    5929 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:05:23.734473    5929 out.go:270] * 
	* 
	W0815 17:05:23.736847    5929 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:05:23.749403    5929 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-645000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000: exit status 7 (67.37125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.42s)
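
Both start paths in this group show the same two-attempt shape: StartHost fails, minikube waits five seconds, retries once, and only then exits with GUEST_PROVISION (exit status 80). A compact Go sketch of that control flow; startHost is a stand-in, not minikube's actual function:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// startHost stands in for the driver start; in this run it always fails
// with the socket_vmnet connection error captured in the log.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err == nil {
		return
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := startHost(); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		os.Exit(80) // the exit status the test asserts against
	}
}
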

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-911000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-911000 create -f testdata/busybox.yaml: exit status 1 (30.930166ms)

** stderr ** 
	error: context "default-k8s-diff-port-911000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-911000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000: exit status 7 (29.19275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-911000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000: exit status 7 (28.462709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
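
Note how the harness treats the non-zero status exit: "exit status 7 (may be ok)". minikube status reports machine state through its exit code, so in this run code 7 simply accompanies a stopped host rather than a harness bug. A sketch of extracting that code with os/exec (binary path and profile name copied from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "default-k8s-diff-port-911000")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 7 here accompanied state "Stopped" (may be ok).
		fmt.Printf("state=%q exit=%d\n", strings.TrimSpace(string(out)), exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("host state: %s\n", out)
}
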

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-911000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-911000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-911000 describe deploy/metrics-server -n kube-system: exit status 1 (26.582917ms)

** stderr ** 
	error: context "default-k8s-diff-port-911000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-911000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000: exit status 7 (28.989042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-911000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-911000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.199470708s)

-- stdout --
	* [default-k8s-diff-port-911000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-911000" primary control-plane node in "default-k8s-diff-port-911000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:05:22.652884    5970 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:22.653025    5970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:22.653029    5970 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:22.653031    5970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:22.653161    5970 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:05:22.654099    5970 out.go:352] Setting JSON to false
	I0815 17:05:22.670341    5970 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3890,"bootTime":1723762832,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:05:22.670406    5970 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:05:22.675516    5970 out.go:177] * [default-k8s-diff-port-911000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:05:22.682497    5970 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:05:22.682563    5970 notify.go:220] Checking for updates...
	I0815 17:05:22.689406    5970 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:05:22.693496    5970 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:05:22.696459    5970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:05:22.699468    5970 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:05:22.702459    5970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:05:22.705800    5970 config.go:182] Loaded profile config "default-k8s-diff-port-911000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:05:22.706057    5970 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:05:22.710394    5970 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 17:05:22.717540    5970 start.go:297] selected driver: qemu2
	I0815 17:05:22.717548    5970 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-911000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:22.717624    5970 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:05:22.720085    5970 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:05:22.720131    5970 cni.go:84] Creating CNI manager for ""
	I0815 17:05:22.720138    5970 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 17:05:22.720160    5970 start.go:340] cluster config:
	{Name:default-k8s-diff-port-911000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:22.723848    5970 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:05:22.732460    5970 out.go:177] * Starting "default-k8s-diff-port-911000" primary control-plane node in "default-k8s-diff-port-911000" cluster
	I0815 17:05:22.736482    5970 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:05:22.736501    5970 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:05:22.736513    5970 cache.go:56] Caching tarball of preloaded images
	I0815 17:05:22.736581    5970 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:05:22.736588    5970 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:05:22.736662    5970 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/default-k8s-diff-port-911000/config.json ...
	I0815 17:05:22.737165    5970 start.go:360] acquireMachinesLock for default-k8s-diff-port-911000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:05:22.737196    5970 start.go:364] duration metric: took 23.25µs to acquireMachinesLock for "default-k8s-diff-port-911000"
	I0815 17:05:22.737207    5970 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:05:22.737213    5970 fix.go:54] fixHost starting: 
	I0815 17:05:22.737341    5970 fix.go:112] recreateIfNeeded on default-k8s-diff-port-911000: state=Stopped err=<nil>
	W0815 17:05:22.737350    5970 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:05:22.741495    5970 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-911000" ...
	I0815 17:05:22.751462    5970 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:05:22.751501    5970 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:56:10:ae:7d:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/disk.qcow2
	I0815 17:05:22.753524    5970 main.go:141] libmachine: STDOUT: 
	I0815 17:05:22.753543    5970 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:05:22.753573    5970 fix.go:56] duration metric: took 16.361125ms for fixHost
	I0815 17:05:22.753577    5970 start.go:83] releasing machines lock for "default-k8s-diff-port-911000", held for 16.376333ms
	W0815 17:05:22.753585    5970 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:05:22.753621    5970 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:22.753626    5970 start.go:729] Will try again in 5 seconds ...
	I0815 17:05:27.755934    5970 start.go:360] acquireMachinesLock for default-k8s-diff-port-911000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:05:27.756363    5970 start.go:364] duration metric: took 338.875µs to acquireMachinesLock for "default-k8s-diff-port-911000"
	I0815 17:05:27.756501    5970 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:05:27.756525    5970 fix.go:54] fixHost starting: 
	I0815 17:05:27.757347    5970 fix.go:112] recreateIfNeeded on default-k8s-diff-port-911000: state=Stopped err=<nil>
	W0815 17:05:27.757375    5970 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:05:27.773824    5970 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-911000" ...
	I0815 17:05:27.777679    5970 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:05:27.777900    5970 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:56:10:ae:7d:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/default-k8s-diff-port-911000/disk.qcow2
	I0815 17:05:27.787395    5970 main.go:141] libmachine: STDOUT: 
	I0815 17:05:27.787464    5970 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:05:27.787564    5970 fix.go:56] duration metric: took 31.040208ms for fixHost
	I0815 17:05:27.787583    5970 start.go:83] releasing machines lock for "default-k8s-diff-port-911000", held for 31.196416ms
	W0815 17:05:27.787785    5970 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-911000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-911000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:27.797677    5970 out.go:201] 
	W0815 17:05:27.800760    5970 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:05:27.800817    5970 out.go:270] * 
	* 
	W0815 17:05:27.803176    5970 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:05:27.811681    5970 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-911000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000: exit status 7 (67.5645ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-645000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000: exit status 7 (32.241ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-645000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-645000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-645000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.257125ms)

** stderr ** 
	error: context "embed-certs-645000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-645000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000: exit status 7 (29.966209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-645000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000: exit status 7 (29.858084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-645000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-645000 --alsologtostderr -v=1: exit status 83 (41.655209ms)

-- stdout --
	* The control-plane node embed-certs-645000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-645000"

-- /stdout --
** stderr ** 
	I0815 17:05:24.014417    5989 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:24.014596    5989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:24.014599    5989 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:24.014602    5989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:24.014754    5989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:05:24.014967    5989 out.go:352] Setting JSON to false
	I0815 17:05:24.014976    5989 mustload.go:65] Loading cluster: embed-certs-645000
	I0815 17:05:24.015176    5989 config.go:182] Loaded profile config "embed-certs-645000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:05:24.019479    5989 out.go:177] * The control-plane node embed-certs-645000 host is not running: state=Stopped
	I0815 17:05:24.023503    5989 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-645000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-645000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000: exit status 7 (28.587458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-645000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000: exit status 7 (29.212541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-523000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-523000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.905553417s)

-- stdout --
	* [newest-cni-523000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-523000" primary control-plane node in "newest-cni-523000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-523000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:05:24.329979    6006 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:24.330104    6006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:24.330107    6006 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:24.330110    6006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:24.330249    6006 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:05:24.331277    6006 out.go:352] Setting JSON to false
	I0815 17:05:24.347577    6006 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3892,"bootTime":1723762832,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:05:24.347652    6006 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:05:24.351555    6006 out.go:177] * [newest-cni-523000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:05:24.358534    6006 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:05:24.358553    6006 notify.go:220] Checking for updates...
	I0815 17:05:24.365502    6006 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:05:24.368554    6006 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:05:24.371528    6006 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:05:24.374538    6006 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:05:24.377500    6006 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:05:24.380922    6006 config.go:182] Loaded profile config "default-k8s-diff-port-911000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:05:24.380990    6006 config.go:182] Loaded profile config "multinode-700000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:05:24.381049    6006 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:05:24.385475    6006 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 17:05:24.391617    6006 start.go:297] selected driver: qemu2
	I0815 17:05:24.391626    6006 start.go:901] validating driver "qemu2" against <nil>
	I0815 17:05:24.391633    6006 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:05:24.393849    6006 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0815 17:05:24.393870    6006 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0815 17:05:24.402456    6006 out.go:177] * Automatically selected the socket_vmnet network
	I0815 17:05:24.405474    6006 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0815 17:05:24.405490    6006 cni.go:84] Creating CNI manager for ""
	I0815 17:05:24.405497    6006 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 17:05:24.405501    6006 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 17:05:24.405529    6006 start.go:340] cluster config:
	{Name:newest-cni-523000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-523000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:24.409054    6006 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:05:24.416497    6006 out.go:177] * Starting "newest-cni-523000" primary control-plane node in "newest-cni-523000" cluster
	I0815 17:05:24.420441    6006 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:05:24.420459    6006 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:05:24.420467    6006 cache.go:56] Caching tarball of preloaded images
	I0815 17:05:24.420523    6006 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:05:24.420529    6006 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:05:24.420586    6006 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/newest-cni-523000/config.json ...
	I0815 17:05:24.420597    6006 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/newest-cni-523000/config.json: {Name:mk4bedc0475d39f7914eaf717710690ab2b061a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:24.420818    6006 start.go:360] acquireMachinesLock for newest-cni-523000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:05:24.420852    6006 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "newest-cni-523000"
	I0815 17:05:24.420865    6006 start.go:93] Provisioning new machine with config: &{Name:newest-cni-523000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-523000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:05:24.420895    6006 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:05:24.429431    6006 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:05:24.446547    6006 start.go:159] libmachine.API.Create for "newest-cni-523000" (driver="qemu2")
	I0815 17:05:24.446569    6006 client.go:168] LocalClient.Create starting
	I0815 17:05:24.446647    6006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:05:24.446676    6006 main.go:141] libmachine: Decoding PEM data...
	I0815 17:05:24.446688    6006 main.go:141] libmachine: Parsing certificate...
	I0815 17:05:24.446724    6006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:05:24.446748    6006 main.go:141] libmachine: Decoding PEM data...
	I0815 17:05:24.446755    6006 main.go:141] libmachine: Parsing certificate...
	I0815 17:05:24.447140    6006 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:05:24.598052    6006 main.go:141] libmachine: Creating SSH key...
	I0815 17:05:24.799824    6006 main.go:141] libmachine: Creating Disk image...
	I0815 17:05:24.799831    6006 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:05:24.800122    6006 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/disk.qcow2
	I0815 17:05:24.809885    6006 main.go:141] libmachine: STDOUT: 
	I0815 17:05:24.809908    6006 main.go:141] libmachine: STDERR: 
	I0815 17:05:24.809956    6006 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/disk.qcow2 +20000M
	I0815 17:05:24.817957    6006 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:05:24.817975    6006 main.go:141] libmachine: STDERR: 
	I0815 17:05:24.817988    6006 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/disk.qcow2
	I0815 17:05:24.817994    6006 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:05:24.818003    6006 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:05:24.818048    6006 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:88:ad:d1:a1:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/disk.qcow2
	I0815 17:05:24.819718    6006 main.go:141] libmachine: STDOUT: 
	I0815 17:05:24.819736    6006 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:05:24.819754    6006 client.go:171] duration metric: took 373.18ms to LocalClient.Create
	I0815 17:05:26.821938    6006 start.go:128] duration metric: took 2.401012584s to createHost
	I0815 17:05:26.821994    6006 start.go:83] releasing machines lock for "newest-cni-523000", held for 2.401124334s
	W0815 17:05:26.822064    6006 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:26.829235    6006 out.go:177] * Deleting "newest-cni-523000" in qemu2 ...
	W0815 17:05:26.857727    6006 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:26.857746    6006 start.go:729] Will try again in 5 seconds ...
	I0815 17:05:31.860048    6006 start.go:360] acquireMachinesLock for newest-cni-523000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:05:31.860539    6006 start.go:364] duration metric: took 403.625µs to acquireMachinesLock for "newest-cni-523000"
	I0815 17:05:31.860715    6006 start.go:93] Provisioning new machine with config: &{Name:newest-cni-523000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-523000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:05:31.861079    6006 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 17:05:31.868703    6006 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:05:31.921049    6006 start.go:159] libmachine.API.Create for "newest-cni-523000" (driver="qemu2")
	I0815 17:05:31.921097    6006 client.go:168] LocalClient.Create starting
	I0815 17:05:31.921203    6006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/ca.pem
	I0815 17:05:31.921275    6006 main.go:141] libmachine: Decoding PEM data...
	I0815 17:05:31.921295    6006 main.go:141] libmachine: Parsing certificate...
	I0815 17:05:31.921354    6006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-964/.minikube/certs/cert.pem
	I0815 17:05:31.921398    6006 main.go:141] libmachine: Decoding PEM data...
	I0815 17:05:31.921411    6006 main.go:141] libmachine: Parsing certificate...
	I0815 17:05:31.921915    6006 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19452-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0815 17:05:32.086174    6006 main.go:141] libmachine: Creating SSH key...
	I0815 17:05:32.142320    6006 main.go:141] libmachine: Creating Disk image...
	I0815 17:05:32.142327    6006 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 17:05:32.142533    6006 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/disk.qcow2.raw /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/disk.qcow2
	I0815 17:05:32.151759    6006 main.go:141] libmachine: STDOUT: 
	I0815 17:05:32.151778    6006 main.go:141] libmachine: STDERR: 
	I0815 17:05:32.151834    6006 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/disk.qcow2 +20000M
	I0815 17:05:32.159727    6006 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 17:05:32.159742    6006 main.go:141] libmachine: STDERR: 
	I0815 17:05:32.159750    6006 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/disk.qcow2
	I0815 17:05:32.159755    6006 main.go:141] libmachine: Starting QEMU VM...
	I0815 17:05:32.159767    6006 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:05:32.159807    6006 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:63:69:ed:9d:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/disk.qcow2
	I0815 17:05:32.161474    6006 main.go:141] libmachine: STDOUT: 
	I0815 17:05:32.161489    6006 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:05:32.161502    6006 client.go:171] duration metric: took 240.399292ms to LocalClient.Create
	I0815 17:05:34.163686    6006 start.go:128] duration metric: took 2.302551333s to createHost
	I0815 17:05:34.163737    6006 start.go:83] releasing machines lock for "newest-cni-523000", held for 2.303167959s
	W0815 17:05:34.164200    6006 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-523000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-523000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:34.178838    6006 out.go:201] 
	W0815 17:05:34.182057    6006 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:05:34.182090    6006 out.go:270] * 
	* 
	W0815 17:05:34.184822    6006 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:05:34.195852    6006 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-523000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-523000 -n newest-cni-523000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-523000 -n newest-cni-523000: exit status 7 (70.06475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-523000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.98s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-911000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000: exit status 7 (32.228834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-911000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-911000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-911000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.7095ms)

** stderr ** 
	error: context "default-k8s-diff-port-911000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-911000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000: exit status 7 (29.500667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-911000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000: exit status 7 (29.440958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-911000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-911000 --alsologtostderr -v=1: exit status 83 (39.008417ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-911000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-911000"

-- /stdout --
** stderr ** 
	I0815 17:05:28.079622    6028 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:28.079772    6028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:28.079775    6028 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:28.079778    6028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:28.079929    6028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:05:28.080157    6028 out.go:352] Setting JSON to false
	I0815 17:05:28.080166    6028 mustload.go:65] Loading cluster: default-k8s-diff-port-911000
	I0815 17:05:28.080342    6028 config.go:182] Loaded profile config "default-k8s-diff-port-911000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:05:28.083475    6028 out.go:177] * The control-plane node default-k8s-diff-port-911000 host is not running: state=Stopped
	I0815 17:05:28.087400    6028 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-911000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-911000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000: exit status 7 (28.616708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-911000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000: exit status 7 (28.649917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-523000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-523000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.185518875s)

-- stdout --
	* [newest-cni-523000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-523000" primary control-plane node in "newest-cni-523000" cluster
	* Restarting existing qemu2 VM for "newest-cni-523000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-523000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 17:05:38.216497    6075 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:38.216646    6075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:38.216649    6075 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:38.216652    6075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:38.216777    6075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:05:38.217751    6075 out.go:352] Setting JSON to false
	I0815 17:05:38.234164    6075 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3906,"bootTime":1723762832,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 17:05:38.234248    6075 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:05:38.238861    6075 out.go:177] * [newest-cni-523000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 17:05:38.245843    6075 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:05:38.245906    6075 notify.go:220] Checking for updates...
	I0815 17:05:38.252812    6075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 17:05:38.255812    6075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 17:05:38.258816    6075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:05:38.262011    6075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 17:05:38.264842    6075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:05:38.268153    6075 config.go:182] Loaded profile config "newest-cni-523000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:05:38.268401    6075 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:05:38.272824    6075 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 17:05:38.279856    6075 start.go:297] selected driver: qemu2
	I0815 17:05:38.279865    6075 start.go:901] validating driver "qemu2" against &{Name:newest-cni-523000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-523000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:38.279929    6075 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:05:38.282340    6075 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0815 17:05:38.282369    6075 cni.go:84] Creating CNI manager for ""
	I0815 17:05:38.282376    6075 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 17:05:38.282405    6075 start.go:340] cluster config:
	{Name:newest-cni-523000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-523000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:38.285973    6075 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:05:38.293786    6075 out.go:177] * Starting "newest-cni-523000" primary control-plane node in "newest-cni-523000" cluster
	I0815 17:05:38.297865    6075 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:05:38.297883    6075 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 17:05:38.297894    6075 cache.go:56] Caching tarball of preloaded images
	I0815 17:05:38.297953    6075 preload.go:172] Found /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:05:38.297959    6075 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:05:38.298025    6075 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/newest-cni-523000/config.json ...
	I0815 17:05:38.298466    6075 start.go:360] acquireMachinesLock for newest-cni-523000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:05:38.298494    6075 start.go:364] duration metric: took 21.292µs to acquireMachinesLock for "newest-cni-523000"
	I0815 17:05:38.298503    6075 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:05:38.298508    6075 fix.go:54] fixHost starting: 
	I0815 17:05:38.298634    6075 fix.go:112] recreateIfNeeded on newest-cni-523000: state=Stopped err=<nil>
	W0815 17:05:38.298643    6075 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:05:38.301780    6075 out.go:177] * Restarting existing qemu2 VM for "newest-cni-523000" ...
	I0815 17:05:38.309684    6075 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:05:38.309722    6075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:63:69:ed:9d:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/disk.qcow2
	I0815 17:05:38.311719    6075 main.go:141] libmachine: STDOUT: 
	I0815 17:05:38.311741    6075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:05:38.311772    6075 fix.go:56] duration metric: took 13.264334ms for fixHost
	I0815 17:05:38.311776    6075 start.go:83] releasing machines lock for "newest-cni-523000", held for 13.2785ms
	W0815 17:05:38.311783    6075 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:05:38.311812    6075 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:38.311817    6075 start.go:729] Will try again in 5 seconds ...
	I0815 17:05:43.314000    6075 start.go:360] acquireMachinesLock for newest-cni-523000: {Name:mk614ce5fc7549970cfe95339a8290d1b8332c26 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:05:43.314476    6075 start.go:364] duration metric: took 377.833µs to acquireMachinesLock for "newest-cni-523000"
	I0815 17:05:43.314613    6075 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:05:43.314632    6075 fix.go:54] fixHost starting: 
	I0815 17:05:43.315276    6075 fix.go:112] recreateIfNeeded on newest-cni-523000: state=Stopped err=<nil>
	W0815 17:05:43.315300    6075 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:05:43.323872    6075 out.go:177] * Restarting existing qemu2 VM for "newest-cni-523000" ...
	I0815 17:05:43.327891    6075 qemu.go:418] Using hvf for hardware acceleration
	I0815 17:05:43.328144    6075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:63:69:ed:9d:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19452-964/.minikube/machines/newest-cni-523000/disk.qcow2
	I0815 17:05:43.337317    6075 main.go:141] libmachine: STDOUT: 
	I0815 17:05:43.337385    6075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 17:05:43.337454    6075 fix.go:56] duration metric: took 22.819459ms for fixHost
	I0815 17:05:43.337494    6075 start.go:83] releasing machines lock for "newest-cni-523000", held for 22.969583ms
	W0815 17:05:43.337693    6075 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-523000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-523000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 17:05:43.344901    6075 out.go:201] 
	W0815 17:05:43.349083    6075 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 17:05:43.349107    6075 out.go:270] * 
	* 
	W0815 17:05:43.351988    6075 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:05:43.359873    6075 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-523000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-523000 -n newest-cni-523000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-523000 -n newest-cni-523000: exit status 7 (67.047ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-523000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
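
Every qemu2 start in this report fails the same way: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so QEMU never receives its network file descriptor and the retry loop gives up with GUEST_PROVISION. A quick way to verify that precondition from Go, using the SocketVMnetPath shown in the cluster config above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client,
	// which needs a listener on this unix socket before QEMU can start.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "Connection refused" here matches the failure mode in the log:
		// the socket_vmnet daemon is not running (or is not accessible).
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}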

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-523000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-523000 -n newest-cni-523000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-523000 -n newest-cni-523000: exit status 7 (29.653166ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-523000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-523000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-523000 --alsologtostderr -v=1: exit status 83 (40.839792ms)

-- stdout --
	* The control-plane node newest-cni-523000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-523000"

-- /stdout --
** stderr ** 
	I0815 17:05:43.543253    6089 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:43.543409    6089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:43.543412    6089 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:43.543415    6089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:43.543539    6089 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 17:05:43.543767    6089 out.go:352] Setting JSON to false
	I0815 17:05:43.543776    6089 mustload.go:65] Loading cluster: newest-cni-523000
	I0815 17:05:43.543994    6089 config.go:182] Loaded profile config "newest-cni-523000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:05:43.547199    6089 out.go:177] * The control-plane node newest-cni-523000 host is not running: state=Stopped
	I0815 17:05:43.551184    6089 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-523000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-523000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-523000 -n newest-cni-523000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-523000 -n newest-cni-523000: exit status 7 (29.88775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-523000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-523000 -n newest-cni-523000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-523000 -n newest-cni-523000: exit status 7 (29.648167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-523000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
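
For reference, the pause failure above exits 83 on a stopped host, while the GUEST_PROVISION failure in SecondStart exits 80; both appear to fall in the 80s range minikube uses for guest-state reason codes (an inference from this log, not a verified mapping). A small sketch that surfaces the exit code the same way the harness does:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "pause",
		"-p", "newest-cni-523000", "--alsologtostderr", "-v=1")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Prints 83 for the stopped host in the run above.
		fmt.Printf("pause exited %d:\n%s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run pause:", err)
		return
	}
	fmt.Println("pause succeeded")
}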


Test pass (155/270)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 8.55
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.11
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.37
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 196.5
29 TestAddons/serial/Volcano 37.36
31 TestAddons/serial/GCPAuth/Namespaces 0.09
33 TestAddons/parallel/Registry 14.31
34 TestAddons/parallel/Ingress 18.82
35 TestAddons/parallel/InspektorGadget 10.27
36 TestAddons/parallel/MetricsServer 5.27
39 TestAddons/parallel/CSI 50.13
40 TestAddons/parallel/Headlamp 17.64
41 TestAddons/parallel/CloudSpanner 6.17
42 TestAddons/parallel/LocalPath 10.58
43 TestAddons/parallel/NvidiaDevicePlugin 5.2
44 TestAddons/parallel/Yakd 10.34
45 TestAddons/StoppedEnableDisable 12.4
53 TestHyperKitDriverInstallOrUpdate 11.62
56 TestErrorSpam/setup 34.56
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.26
59 TestErrorSpam/pause 0.66
60 TestErrorSpam/unpause 0.59
61 TestErrorSpam/stop 64.3
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 75.38
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.11
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.75
73 TestFunctional/serial/CacheCmd/cache/add_local 1.15
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.67
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.84
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
81 TestFunctional/serial/ExtraConfig 39.6
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 0.69
84 TestFunctional/serial/LogsFileCmd 0.67
85 TestFunctional/serial/InvalidService 4.91
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 8.4
89 TestFunctional/parallel/DryRun 0.22
90 TestFunctional/parallel/InternationalLanguage 0.11
91 TestFunctional/parallel/StatusCmd 0.26
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 25.67
99 TestFunctional/parallel/SSHCmd 0.14
100 TestFunctional/parallel/CpCmd 0.43
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.42
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
111 TestFunctional/parallel/License 0.39
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.22
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
118 TestFunctional/parallel/ImageCommands/ImageBuild 1.99
119 TestFunctional/parallel/ImageCommands/Setup 1.84
120 TestFunctional/parallel/DockerEnv/bash 0.29
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
124 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.38
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.24
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.19
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.11
137 TestFunctional/parallel/ServiceCmd/List 0.13
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
140 TestFunctional/parallel/ServiceCmd/Format 0.1
141 TestFunctional/parallel/ServiceCmd/URL 0.11
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
149 TestFunctional/parallel/ProfileCmd/profile_list 0.13
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.13
151 TestFunctional/parallel/MountCmd/any-port 5.13
152 TestFunctional/parallel/MountCmd/specific-port 0.8
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.48
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 182.65
161 TestMultiControlPlane/serial/DeployApp 4.56
162 TestMultiControlPlane/serial/PingHostFromPods 0.72
163 TestMultiControlPlane/serial/AddWorkerNode 54.76
164 TestMultiControlPlane/serial/NodeLabels 0.16
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.23
166 TestMultiControlPlane/serial/CopyFile 4.22
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 150.09
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 3.35
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.2
208 TestMainNoArgs 0.03
255 TestStoppedBinaryUpgrade/Setup 1
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
272 TestNoKubernetes/serial/ProfileList 31.48
273 TestNoKubernetes/serial/Stop 3.73
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
289 TestStartStop/group/old-k8s-version/serial/Stop 3.62
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.72
291 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
301 TestStartStop/group/no-preload/serial/Stop 3.09
302 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
314 TestStartStop/group/embed-certs/serial/Stop 3.14
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.52
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
334 TestStartStop/group/newest-cni/serial/Stop 3.72
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-953000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-953000: exit status 85 (97.876084ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-953000 | jenkins | v1.33.1 | 15 Aug 24 16:05 PDT |          |
	|         | -p download-only-953000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 16:05:16
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 16:05:16.036591    1448 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:05:16.036727    1448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:05:16.036731    1448 out.go:358] Setting ErrFile to fd 2...
	I0815 16:05:16.036733    1448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:05:16.036847    1448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	W0815 16:05:16.036922    1448 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19452-964/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19452-964/.minikube/config/config.json: no such file or directory
	I0815 16:05:16.038195    1448 out.go:352] Setting JSON to true
	I0815 16:05:16.055569    1448 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":285,"bootTime":1723762831,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:05:16.055657    1448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:05:16.060230    1448 out.go:97] [download-only-953000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:05:16.060416    1448 notify.go:220] Checking for updates...
	W0815 16:05:16.060425    1448 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball: no such file or directory
	I0815 16:05:16.066088    1448 out.go:169] MINIKUBE_LOCATION=19452
	I0815 16:05:16.072126    1448 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:05:16.075038    1448 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:05:16.079097    1448 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:05:16.082109    1448 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	W0815 16:05:16.088173    1448 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 16:05:16.088432    1448 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:05:16.093097    1448 out.go:97] Using the qemu2 driver based on user configuration
	I0815 16:05:16.093115    1448 start.go:297] selected driver: qemu2
	I0815 16:05:16.093118    1448 start.go:901] validating driver "qemu2" against <nil>
	I0815 16:05:16.093188    1448 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 16:05:16.095037    1448 out.go:169] Automatically selected the socket_vmnet network
	I0815 16:05:16.101644    1448 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0815 16:05:16.101725    1448 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 16:05:16.101821    1448 cni.go:84] Creating CNI manager for ""
	I0815 16:05:16.101841    1448 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0815 16:05:16.101890    1448 start.go:340] cluster config:
	{Name:download-only-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:05:16.107130    1448 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:05:16.112152    1448 out.go:97] Downloading VM boot image ...
	I0815 16:05:16.112176    1448 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0815 16:05:23.725797    1448 out.go:97] Starting "download-only-953000" primary control-plane node in "download-only-953000" cluster
	I0815 16:05:23.725823    1448 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 16:05:23.786654    1448 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0815 16:05:23.786677    1448 cache.go:56] Caching tarball of preloaded images
	I0815 16:05:23.786863    1448 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 16:05:23.791874    1448 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0815 16:05:23.791881    1448 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 16:05:23.886969    1448 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0815 16:05:29.295115    1448 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 16:05:29.295268    1448 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 16:05:29.990493    1448 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0815 16:05:29.990695    1448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/download-only-953000/config.json ...
	I0815 16:05:29.990713    1448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/download-only-953000/config.json: {Name:mkb837f547f5160dfe32538295c6ec5d3deaeaf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:05:29.990928    1448 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 16:05:29.991130    1448 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0815 16:05:30.529206    1448 out.go:193] 
	W0815 16:05:30.534308    1448 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19452-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104907960 0x104907960 0x104907960 0x104907960 0x104907960 0x104907960 0x104907960] Decompressors:map[bz2:0x14000647770 gz:0x14000647778 tar:0x14000647700 tar.bz2:0x14000647710 tar.gz:0x14000647720 tar.xz:0x14000647730 tar.zst:0x14000647760 tbz2:0x14000647710 tgz:0x14000647720 txz:0x14000647730 tzst:0x14000647760 xz:0x14000647780 zip:0x14000647790 zst:0x14000647788] Getters:map[file:0x14000768b40 http:0x14000a0a280 https:0x14000a0a2d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0815 16:05:30.534332    1448 out_reason.go:110] 
	W0815 16:05:30.543166    1448 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:05:30.546252    1448 out.go:193] 
	
	
	* The control-plane node download-only-953000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-953000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
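
The root cause in the log above is a plain 404 from dl.k8s.io: the checksum file for kubectl v1.20.0 on darwin/arm64 does not exist (arm64 macOS kubectl binaries only appeared in later releases, as far as the 404 indicates), so caching kubectl fails even though the rest of the download-only flow works. A short sketch that confirms the missing artifact, using the URL from the log:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL copied from the download failure above.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()
	fmt.Println(url, "->", resp.Status) // expected: 404 Not Found
}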

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-953000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0/json-events (8.55s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-154000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-154000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (8.549727875s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (8.55s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-154000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-154000: exit status 85 (75.4845ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-953000 | jenkins | v1.33.1 | 15 Aug 24 16:05 PDT |                     |
	|         | -p download-only-953000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 15 Aug 24 16:05 PDT | 15 Aug 24 16:05 PDT |
	| delete  | -p download-only-953000        | download-only-953000 | jenkins | v1.33.1 | 15 Aug 24 16:05 PDT | 15 Aug 24 16:05 PDT |
	| start   | -o=json --download-only        | download-only-154000 | jenkins | v1.33.1 | 15 Aug 24 16:05 PDT |                     |
	|         | -p download-only-154000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 16:05:30
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 16:05:30.955704    1477 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:05:30.955851    1477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:05:30.955854    1477 out.go:358] Setting ErrFile to fd 2...
	I0815 16:05:30.955856    1477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:05:30.955990    1477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:05:30.957006    1477 out.go:352] Setting JSON to true
	I0815 16:05:30.973206    1477 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":299,"bootTime":1723762831,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:05:30.973274    1477 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:05:30.978492    1477 out.go:97] [download-only-154000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:05:30.978563    1477 notify.go:220] Checking for updates...
	I0815 16:05:30.982404    1477 out.go:169] MINIKUBE_LOCATION=19452
	I0815 16:05:30.985506    1477 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:05:30.989521    1477 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:05:30.992434    1477 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:05:30.995480    1477 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	W0815 16:05:31.001380    1477 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 16:05:31.001550    1477 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:05:31.004403    1477 out.go:97] Using the qemu2 driver based on user configuration
	I0815 16:05:31.004411    1477 start.go:297] selected driver: qemu2
	I0815 16:05:31.004415    1477 start.go:901] validating driver "qemu2" against <nil>
	I0815 16:05:31.004467    1477 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 16:05:31.007427    1477 out.go:169] Automatically selected the socket_vmnet network
	I0815 16:05:31.012666    1477 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0815 16:05:31.012746    1477 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 16:05:31.012781    1477 cni.go:84] Creating CNI manager for ""
	I0815 16:05:31.012789    1477 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:05:31.012797    1477 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 16:05:31.012842    1477 start.go:340] cluster config:
	{Name:download-only-154000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-154000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:05:31.016232    1477 iso.go:125] acquiring lock: {Name:mk61e0bf8bcd6d7ec7e3679fe0cc798f0ef82816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:05:31.019433    1477 out.go:97] Starting "download-only-154000" primary control-plane node in "download-only-154000" cluster
	I0815 16:05:31.019440    1477 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:05:31.087781    1477 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 16:05:31.087791    1477 cache.go:56] Caching tarball of preloaded images
	I0815 16:05:31.087960    1477 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:05:31.093146    1477 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0815 16:05:31.093155    1477 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 16:05:31.180708    1477 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 16:05:35.175625    1477 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 16:05:35.175958    1477 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19452-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 16:05:35.697393    1477 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:05:35.697625    1477 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/download-only-154000/config.json ...
	I0815 16:05:35.697641    1477 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/download-only-154000/config.json: {Name:mk6196c51659d3465f6efdf70c101150e09d4cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:05:35.699647    1477 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:05:35.699794    1477 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19452-964/.minikube/cache/darwin/arm64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-154000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-154000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)
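
For reference, a --download-only run like the one logged above only populates the local cache; a quick way to confirm the artifacts landed, with MINIKUBE_HOME as exported in this job's environment (paths taken from the log above), might be:

	ls "$MINIKUBE_HOME/cache/preloaded-tarball/"
	ls "$MINIKUBE_HOME/cache/darwin/arm64/v1.31.0/"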
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-154000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-752000 --alsologtostderr --binary-mirror http://127.0.0.1:49311 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-752000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-752000
--- PASS: TestBinaryMirror (0.37s)
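
The test above points --binary-mirror at a throwaway local HTTP endpoint that the test harness itself serves; a rough manual equivalent, with python3's built-in file server as a stand-in mirror and /tmp/mirror as a hypothetical mirror root, could look like:

	python3 -m http.server 49311 --directory /tmp/mirror &
	out/minikube-darwin-arm64 start --download-only -p binary-mirror-752000 --binary-mirror http://127.0.0.1:49311 --driver=qemu2
	out/minikube-darwin-arm64 delete -p binary-mirror-752000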
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-156000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-156000: exit status 85 (55.151125ms)
-- stdout --
	* Profile "addons-156000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-156000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-156000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-156000: exit status 85 (59.091708ms)
-- stdout --
	* Profile "addons-156000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-156000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-156000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-156000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m16.496897833s)
--- PASS: TestAddons/Setup (196.50s)
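
Every addon exercised in the parallel subtests below is toggled through the same subcommands; a minimal sketch against the profile created above:

	out/minikube-darwin-arm64 -p addons-156000 addons list
	out/minikube-darwin-arm64 -p addons-156000 addons enable metrics-server
	out/minikube-darwin-arm64 -p addons-156000 addons disable metrics-server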
=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 7.773208ms
addons_test.go:905: volcano-admission stabilized in 7.814875ms
addons_test.go:913: volcano-controller stabilized in 7.834625ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-94j2v" [cd67adac-d93d-47b7-8667-e3be80ddb9c0] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.007525291s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-pnv78" [473cb0e6-f6b4-4235-8563-4d334b231a77] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00720475s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-hh9fx" [bc651b46-aaa6-4b56-abd3-49bb49afbdc6] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005590084s
addons_test.go:932: (dbg) Run:  kubectl --context addons-156000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-156000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-156000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [dfffb97a-51da-4163-bb58-3d844b48ef42] Pending
helpers_test.go:344: "test-job-nginx-0" [dfffb97a-51da-4163-bb58-3d844b48ef42] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [dfffb97a-51da-4163-bb58-3d844b48ef42] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.0137355s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-156000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-156000 addons disable volcano --alsologtostderr -v=1: (10.112087083s)
--- PASS: TestAddons/serial/Volcano (37.36s)
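
A minimal vcjob manifest consistent with the pod name test-job-nginx-0 above might look like the sketch below; the actual testdata/vcjob.yaml may differ in resources and namespace setup, so treat every field here as an assumption:

	kubectl --context addons-156000 create namespace my-volcano
	kubectl --context addons-156000 apply -f - <<-'EOF'
	apiVersion: batch.volcano.sh/v1alpha1
	kind: Job
	metadata:
	  name: test-job
	  namespace: my-volcano
	spec:
	  schedulerName: volcano
	  minAvailable: 1
	  tasks:
	    - name: nginx
	      replicas: 1
	      template:
	        spec:
	          restartPolicy: Never
	          containers:
	            - name: nginx
	              image: nginx:latest
	EOF
	kubectl --context addons-156000 get vcjob -n my-volcano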
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-156000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-156000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.090417ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-sr9sp" [f8879245-ded9-4a2b-adc1-c74a51744417] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004324166s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-r7ckm" [ee3c609d-0b54-4439-abd4-ad5af192b740] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.021503583s
addons_test.go:342: (dbg) Run:  kubectl --context addons-156000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-156000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-156000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.006096542s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-156000 ip
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-156000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.31s)
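
Besides the in-cluster DNS check above, the registry addon is also reachable on the node itself; assuming the standard registry v2 API on port 5000 (the same port probed at 192.168.105.2:5000 later in this log), its catalog can be queried from the host with:

	curl "http://$(out/minikube-darwin-arm64 -p addons-156000 ip):5000/v2/_catalog"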
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-156000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-156000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-156000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0b25f3b3-2227-46f3-937b-51eaaf4d3c0e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0b25f3b3-2227-46f3-937b-51eaaf4d3c0e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00849675s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-156000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-156000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-156000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-156000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-156000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-156000 addons disable ingress --alsologtostderr -v=1: (7.264953625s)
--- PASS: TestAddons/parallel/Ingress (18.82s)
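
The curl above runs inside the VM via minikube ssh; with the socket_vmnet network used in this job the node IP should also be routable from the host, so a host-side equivalent (assuming the ingress controller holds port 80 on the node) would be:

	curl -s "http://$(out/minikube-darwin-arm64 -p addons-156000 ip)/" -H 'Host: nginx.example.com'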
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rbflw" [f6e63fad-2598-470c-91fa-c76731b03234] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005709708s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-156000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-156000: (5.264357667s)
--- PASS: TestAddons/parallel/InspektorGadget (10.27s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.298041ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-dsjkg" [6c8884c0-c534-4ffc-8ffb-9493ad4a6158] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005763208s
addons_test.go:417: (dbg) Run:  kubectl --context addons-156000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-156000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.27s)
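
Once metrics-server reports healthy, the usual resource views become available through kubectl, as the test's own `top pods` call shows; for example:

	kubectl --context addons-156000 top nodes
	kubectl --context addons-156000 top pods -n kube-system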
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.112166ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-156000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-156000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a99eb426-6f4b-4714-bb91-1d6f31234f85] Pending
helpers_test.go:344: "task-pv-pod" [a99eb426-6f4b-4714-bb91-1d6f31234f85] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a99eb426-6f4b-4714-bb91-1d6f31234f85] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.009035958s
addons_test.go:590: (dbg) Run:  kubectl --context addons-156000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-156000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-156000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-156000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-156000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-156000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2024/08/15 16:10:04 [DEBUG] GET http://192.168.105.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-156000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [513b2b10-0ca4-4a21-9176-e2a4445cd961] Pending
helpers_test.go:344: "task-pv-pod-restore" [513b2b10-0ca4-4a21-9176-e2a4445cd961] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [513b2b10-0ca4-4a21-9176-e2a4445cd961] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.008396834s
addons_test.go:632: (dbg) Run:  kubectl --context addons-156000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-156000 delete pod task-pv-pod-restore: (1.273042s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-156000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-156000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-156000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-156000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.090525791s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-156000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.13s)
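
The snapshot/restore round trip above reduces to a VolumeSnapshot plus a PVC that names it as a dataSource; a minimal sketch, assuming the class names csi-hostpath-sc and csi-hostpath-snapclass that csi-hostpath-driver deployments conventionally install (the test's actual testdata may differ):

	kubectl --context addons-156000 apply -f - <<-'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass
	  source:
	    persistentVolumeClaimName: hpvc
	EOF
	kubectl --context addons-156000 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc
	  dataSource:
	    apiGroup: snapshot.storage.k8s.io
	    kind: VolumeSnapshot
	    name: new-snapshot-demo
	  accessModes: [ReadWriteOnce]
	  resources:
	    requests:
	      storage: 1Gi
	EOF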
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-156000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-ndb8k" [1e518738-f8ac-474d-ace4-c3844225d071] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-ndb8k" [1e518738-f8ac-474d-ace4-c3844225d071] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.01060575s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-156000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-156000 addons disable headlamp --alsologtostderr -v=1: (5.27683s)
--- PASS: TestAddons/parallel/Headlamp (17.64s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-gq4mr" [436aa1d8-a649-4f7e-9995-f509709a7614] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004445666s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-156000
--- PASS: TestAddons/parallel/CloudSpanner (6.17s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-156000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-156000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d5a01f60-d98a-424a-95b9-68b9f5184447] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d5a01f60-d98a-424a-95b9-68b9f5184447] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d5a01f60-d98a-424a-95b9-68b9f5184447] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004944666s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-156000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-156000 ssh "cat /opt/local-path-provisioner/pvc-58471856-c58f-4a41-b3e3-6218574b4fdd_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-156000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-156000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-156000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.58s)
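
The repeated Pending polls above are expected: the rancher local-path provisioner binds WaitForFirstConsumer, so the claim stays Pending until a pod mounts it. A minimal claim against the provisioner, with the storage class name assumed to be local-path as in upstream local-path-provisioner:

	kubectl --context addons-156000 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  storageClassName: local-path
	  accessModes: [ReadWriteOnce]
	  resources:
	    requests:
	      storage: 64Mi
	EOF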
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8b6kr" [e372ca96-eeca-41c2-93e0-f0efc0951027] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.014346125s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-156000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.20s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-97pdc" [c48597b8-0a9f-446c-8dec-4f9b340f0817] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005346s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-156000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-156000 addons disable yakd --alsologtostderr -v=1: (5.334439167s)
--- PASS: TestAddons/parallel/Yakd (10.34s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-156000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-156000: (12.208278584s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-156000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-156000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-156000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.62s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-954000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-954000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 --driver=qemu2 : (34.555483958s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (34.56s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 status
--- PASS: TestErrorSpam/status (0.26s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 pause
--- PASS: TestErrorSpam/pause (0.66s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 unpause
--- PASS: TestErrorSpam/unpause (0.59s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 stop: (12.208135167s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 stop: (26.062177459s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-954000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-954000 stop: (26.029780417s)
--- PASS: TestErrorSpam/stop (64.30s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19452-964/.minikube/files/etc/test/nested/copy/1446/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-899000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0815 16:13:56.891863    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:56.900846    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:56.914240    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:56.937636    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:56.981073    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:57.064530    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:57.227564    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:57.551006    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:58.194701    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:59.478378    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:14:02.042233    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:14:07.165868    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:14:17.409343    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-899000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m15.378939292s)
--- PASS: TestFunctional/serial/StartWithProxy (75.38s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-899000 --alsologtostderr -v=8
E0815 16:14:37.892338    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-899000 --alsologtostderr -v=8: (38.112674708s)
functional_test.go:663: soft start took 38.113188291s for "functional-899000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.11s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-899000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-899000 cache add registry.k8s.io/pause:3.1: (1.047569709s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-899000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1565007604/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 cache add minikube-local-cache-test:functional-899000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 cache delete minikube-local-cache-test:functional-899000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-899000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-899000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (70.5865ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.67s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)
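
Taken together, the cache subtests above cover the full lifecycle, which on the command line reduces to the same invocations the log records:

	out/minikube-darwin-arm64 -p functional-899000 cache add registry.k8s.io/pause:3.1
	out/minikube-darwin-arm64 cache list
	out/minikube-darwin-arm64 -p functional-899000 cache reload
	out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1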
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 kubectl -- --context functional-899000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.84s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-899000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-899000 get pods: (1.018307542s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-899000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0815 16:15:18.855107    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-899000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.6026135s)
functional_test.go:761: restart took 39.602702042s for "functional-899000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.60s)
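
--extra-config takes component.key=value pairs and is applied on (re)start. The command from the run, reflowed for readability:

    $ minikube start -p functional-899000 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
        --wait=all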

                                                
                                    
TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-899000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
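
The same phase/readiness check can be done with a jsonpath query instead of parsing the full JSON dump; a sketch (the jsonpath expression is illustrative, not what the test runs):

    $ kubectl --context functional-899000 get po -l tier=control-plane -n kube-system \
        -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'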

                                                
                                    
TestFunctional/serial/LogsCmd (0.69s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.69s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.67s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2554462331/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.67s)
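
Both forms of the logs command, for reference; the output path below is arbitrary:

    $ minikube -p functional-899000 logs
    $ minikube -p functional-899000 logs --file /tmp/functional-899000.logs.txt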

                                                
                                    
TestFunctional/serial/InvalidService (4.91s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-899000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-899000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-899000: exit status 115 (150.061792ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31902 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-899000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-899000 delete -f testdata/invalidsvc.yaml: (1.652327875s)
--- PASS: TestFunctional/serial/InvalidService (4.91s)
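
Exit status 115 is the SVC_UNREACHABLE code seen in stderr: the NodePort URL exists, but no running pod backs the service. The flow as a sketch:

    $ kubectl --context functional-899000 apply -f testdata/invalidsvc.yaml
    $ minikube -p functional-899000 service invalid-svc   # exit 115: SVC_UNREACHABLE
    $ kubectl --context functional-899000 delete -f testdata/invalidsvc.yaml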

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-899000 config get cpus: exit status 14 (30.905625ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-899000 config get cpus: exit status 14 (29.981375ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
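
Exit status 14 is the expected code for a key that is not set; the round trip as a sketch:

    $ minikube -p functional-899000 config set cpus 2
    $ minikube -p functional-899000 config get cpus     # prints 2
    $ minikube -p functional-899000 config unset cpus
    $ minikube -p functional-899000 config get cpus     # exit 14: key not found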

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-899000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-899000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2293: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.40s)
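
The "unable to kill pid" helper message is benign here: the dashboard process had already exited by the time the test tore it down. Run interactively, the command stays in the foreground and prints the proxy URL:

    $ minikube dashboard --url --port 36195 -p functional-899000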

                                                
                                    
TestFunctional/parallel/DryRun (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-899000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-899000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (116.734209ms)

                                                
                                                
-- stdout --
	* [functional-899000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 16:16:37.707694    2270 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:16:37.707831    2270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:16:37.707835    2270 out.go:358] Setting ErrFile to fd 2...
	I0815 16:16:37.707837    2270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:16:37.707996    2270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:16:37.709226    2270 out.go:352] Setting JSON to false
	I0815 16:16:37.727449    2270 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":966,"bootTime":1723762831,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:16:37.727515    2270 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:16:37.732006    2270 out.go:177] * [functional-899000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 16:16:37.739175    2270 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:16:37.739222    2270 notify.go:220] Checking for updates...
	I0815 16:16:37.746054    2270 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:16:37.750149    2270 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:16:37.753113    2270 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:16:37.756151    2270 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:16:37.759178    2270 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:16:37.762339    2270 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:16:37.762581    2270 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:16:37.767093    2270 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 16:16:37.774211    2270 start.go:297] selected driver: qemu2
	I0815 16:16:37.774221    2270 start.go:901] validating driver "qemu2" against &{Name:functional-899000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-899000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:16:37.774294    2270 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:16:37.781128    2270 out.go:201] 
	W0815 16:16:37.784125    2270 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0815 16:16:37.787101    2270 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-899000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
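
Exit status 23 maps to the RSRC_INSUFFICIENT_REQ_MEMORY error in stderr; --dry-run validates flags against the existing profile without starting anything. A sketch:

    $ minikube start -p functional-899000 --dry-run --memory 250MB --driver=qemu2   # exit 23: below the 1800MB usable minimum
    $ minikube start -p functional-899000 --dry-run --driver=qemu2                  # validates cleanly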

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-899000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-899000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.952958ms)

                                                
                                                
-- stdout --
	* [functional-899000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 16:16:37.924723    2281 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:16:37.924825    2281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:16:37.924828    2281 out.go:358] Setting ErrFile to fd 2...
	I0815 16:16:37.924831    2281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:16:37.924974    2281 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
	I0815 16:16:37.926432    2281 out.go:352] Setting JSON to false
	I0815 16:16:37.943192    2281 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":966,"bootTime":1723762831,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 16:16:37.943273    2281 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:16:37.948181    2281 out.go:177] * [functional-899000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0815 16:16:37.955214    2281 notify.go:220] Checking for updates...
	I0815 16:16:37.959097    2281 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:16:37.963161    2281 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	I0815 16:16:37.966103    2281 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 16:16:37.969161    2281 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:16:37.972150    2281 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	I0815 16:16:37.975121    2281 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:16:37.978412    2281 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:16:37.978668    2281 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:16:37.983175    2281 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0815 16:16:37.990133    2281 start.go:297] selected driver: qemu2
	I0815 16:16:37.990140    2281 start.go:901] validating driver "qemu2" against &{Name:functional-899000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-899000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:16:37.990194    2281 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:16:37.997100    2281 out.go:201] 
	W0815 16:16:38.001253    2281 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0815 16:16:38.005106    2281 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
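
The French output is driven by the process locale; presumably the harness exports a French locale before invoking minikube. A sketch, assuming minikube honors LC_ALL (an assumption, not shown in the log):

    $ LC_ALL=fr_FR minikube start -p functional-899000 --dry-run --memory 250MB --driver=qemu2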

                                                
                                    
TestFunctional/parallel/StatusCmd (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)
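
The three output modes exercised above, for reference; the go-template fields are those shown in the run (.Host, .Kubelet, .APIServer, .Kubeconfig):

    $ minikube -p functional-899000 status
    $ minikube -p functional-899000 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    $ minikube -p functional-899000 status -o json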

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7795fb4f-b5b9-4231-bddb-fe511c29f7aa] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014594042s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-899000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-899000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-899000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-899000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7f06312c-dab2-4a50-b77b-1a55c16796eb] Pending
helpers_test.go:344: "sp-pod" [7f06312c-dab2-4a50-b77b-1a55c16796eb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7f06312c-dab2-4a50-b77b-1a55c16796eb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.0098865s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-899000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-899000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-899000 delete -f testdata/storage-provisioner/pod.yaml: (1.118221708s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-899000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c2a08fdb-4091-463e-a857-94455ae5e57a] Pending
helpers_test.go:344: "sp-pod" [c2a08fdb-4091-463e-a857-94455ae5e57a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c2a08fdb-4091-463e-a857-94455ae5e57a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010435291s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-899000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.67s)
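
The second pod is the point of the test: data written to the claim survives pod deletion because it lives on the provisioned volume. The flow as a sketch, reusing the test's manifests:

    $ kubectl --context functional-899000 apply -f testdata/storage-provisioner/pvc.yaml
    $ kubectl --context functional-899000 apply -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-899000 exec sp-pod -- touch /tmp/mount/foo
    $ kubectl --context functional-899000 delete -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-899000 apply -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-899000 exec sp-pod -- ls /tmp/mount   # foo should still be present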

                                                
                                    
TestFunctional/parallel/SSHCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh -n functional-899000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 cp functional-899000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd4243935652/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh -n functional-899000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh -n functional-899000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.43s)
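
minikube cp copies in either direction; the VM side is addressed as <node>:<path>. A sketch:

    $ minikube -p functional-899000 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> VM
    $ minikube -p functional-899000 cp functional-899000:/home/docker/cp-test.txt ./cp-test.txt  # VM -> host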

                                                
                                    
TestFunctional/parallel/FileSync (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1446/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "sudo cat /etc/test/nested/copy/1446/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

                                                
                                    
TestFunctional/parallel/CertSync (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1446.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "sudo cat /etc/ssl/certs/1446.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1446.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "sudo cat /usr/share/ca-certificates/1446.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14462.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "sudo cat /etc/ssl/certs/14462.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14462.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "sudo cat /usr/share/ca-certificates/14462.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.42s)
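
The .0 filenames (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names for the two synced certificates. Given a host-side copy of a cert (the path below is hypothetical), the hash can be checked with:

    $ openssl x509 -noout -subject_hash -in ~/some-cert.pem   # prints e.g. 51391683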

                                                
                                    
TestFunctional/parallel/NodeLabels (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-899000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
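
A simpler equivalent of the go-template query, for checking labels by hand (not what the test runs):

    $ kubectl --context functional-899000 get nodes --show-labels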

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-899000 ssh "sudo systemctl is-active crio": exit status 1 (65.351125ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
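
systemctl is-active exits 3 for an inactive unit, and minikube ssh propagates that code, so the non-zero exit here is the expected result on a Docker-runtime cluster. By hand:

    $ minikube -p functional-899000 ssh "sudo systemctl is-active crio"     # inactive, exit 3
    $ minikube -p functional-899000 ssh "sudo systemctl is-active docker"   # active, exit 0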

                                                
                                    
TestFunctional/parallel/License (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.39s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.22s)
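
Both version forms, for reference:

    $ minikube -p functional-899000 version --short
    $ minikube -p functional-899000 version -o json --components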

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-899000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-899000
docker.io/kicbase/echo-server:functional-899000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-899000 image ls --format short --alsologtostderr:
I0815 16:16:39.217336    2310 out.go:345] Setting OutFile to fd 1 ...
I0815 16:16:39.217505    2310 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:16:39.217509    2310 out.go:358] Setting ErrFile to fd 2...
I0815 16:16:39.217511    2310 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:16:39.217624    2310 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
I0815 16:16:39.218039    2310 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:16:39.218107    2310 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:16:39.218924    2310 ssh_runner.go:195] Run: systemctl --version
I0815 16:16:39.218933    2310 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/functional-899000/id_rsa Username:docker}
I0815 16:16:39.247437    2310 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-899000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-899000 | cc6ea057232e6 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| docker.io/kicbase/echo-server               | functional-899000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| localhost/my-image                          | functional-899000 | de6ac9579b468 | 1.41MB |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| docker.io/library/nginx                     | latest            | a9dfdba8b7190 | 193MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-899000 image ls --format table --alsologtostderr:
I0815 16:16:41.427540    2322 out.go:345] Setting OutFile to fd 1 ...
I0815 16:16:41.427690    2322 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:16:41.427694    2322 out.go:358] Setting ErrFile to fd 2...
I0815 16:16:41.427696    2322 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:16:41.427822    2322 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
I0815 16:16:41.428314    2322 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:16:41.428375    2322 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:16:41.429206    2322 ssh_runner.go:195] Run: systemctl --version
I0815 16:16:41.429214    2322 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/functional-899000/id_rsa Username:docker}
I0815 16:16:41.457427    2322 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/08/15 16:16:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-899000 image ls --format json --alsologtostderr:
[{"id":"de6ac9579b46866d5fea6acd7a39f16e6a9f53c9f12696994b58452f7a1d7f94","repoDigests":[],"repoTags":["localhost/my-image:functional-899000"],"size":"1410000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-899000"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"fcb0683e6bdbd083710cf2d6fd7
eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["d
ocker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"cc6ea057232e6d1bd2ebdddbccb0625c877b5fa0cc30c6c0279d32a0ba081bad","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-899000"],"size":"30"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}
]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-899000 image ls --format json --alsologtostderr:
I0815 16:16:41.354658    2320 out.go:345] Setting OutFile to fd 1 ...
I0815 16:16:41.354772    2320 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:16:41.354777    2320 out.go:358] Setting ErrFile to fd 2...
I0815 16:16:41.354779    2320 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:16:41.354889    2320 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
I0815 16:16:41.355291    2320 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:16:41.355351    2320 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:16:41.356087    2320 ssh_runner.go:195] Run: systemctl --version
I0815 16:16:41.356097    2320 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/functional-899000/id_rsa Username:docker}
I0815 16:16:41.384424    2320 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-899000 image ls --format yaml --alsologtostderr:
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-899000
size: "4780000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: cc6ea057232e6d1bd2ebdddbccb0625c877b5fa0cc30c6c0279d32a0ba081bad
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-899000
size: "30"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-899000 image ls --format yaml --alsologtostderr:
I0815 16:16:39.290306    2312 out.go:345] Setting OutFile to fd 1 ...
I0815 16:16:39.290491    2312 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:16:39.290495    2312 out.go:358] Setting ErrFile to fd 2...
I0815 16:16:39.290497    2312 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:16:39.290641    2312 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
I0815 16:16:39.291127    2312 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:16:39.291195    2312 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:16:39.292078    2312 ssh_runner.go:195] Run: systemctl --version
I0815 16:16:39.292089    2312 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/functional-899000/id_rsa Username:docker}
I0815 16:16:39.321710    2312 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)
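
The four ImageList variants above render the same image inventory in different formats; for reference:

    $ minikube -p functional-899000 image ls --format short
    $ minikube -p functional-899000 image ls --format table
    $ minikube -p functional-899000 image ls --format json
    $ minikube -p functional-899000 image ls --format yaml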

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-899000 ssh pgrep buildkitd: exit status 1 (61.025916ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image build -t localhost/my-image:functional-899000 testdata/build --alsologtostderr
E0815 16:16:40.776265    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-899000 image build -t localhost/my-image:functional-899000 testdata/build --alsologtostderr: (1.850629458s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-899000 image build -t localhost/my-image:functional-899000 testdata/build --alsologtostderr:
I0815 16:16:39.424874    2316 out.go:345] Setting OutFile to fd 1 ...
I0815 16:16:39.425060    2316 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:16:39.425063    2316 out.go:358] Setting ErrFile to fd 2...
I0815 16:16:39.425066    2316 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:16:39.425187    2316 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-964/.minikube/bin
I0815 16:16:39.425557    2316 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:16:39.426106    2316 config.go:182] Loaded profile config "functional-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:16:39.426903    2316 ssh_runner.go:195] Run: systemctl --version
I0815 16:16:39.426911    2316 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-964/.minikube/machines/functional-899000/id_rsa Username:docker}
I0815 16:16:39.455152    2316 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.338651534.tar
I0815 16:16:39.455207    2316 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0815 16:16:39.459034    2316 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.338651534.tar
I0815 16:16:39.460530    2316 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.338651534.tar: stat -c "%s %y" /var/lib/minikube/build/build.338651534.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.338651534.tar': No such file or directory
I0815 16:16:39.460551    2316 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.338651534.tar --> /var/lib/minikube/build/build.338651534.tar (3072 bytes)
I0815 16:16:39.468410    2316 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.338651534
I0815 16:16:39.471965    2316 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.338651534 -xf /var/lib/minikube/build/build.338651534.tar
I0815 16:16:39.475475    2316 docker.go:360] Building image: /var/lib/minikube/build/build.338651534
I0815 16:16:39.475526    2316 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-899000 /var/lib/minikube/build/build.338651534
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers done
#8 writing image sha256:de6ac9579b46866d5fea6acd7a39f16e6a9f53c9f12696994b58452f7a1d7f94 done
#8 naming to localhost/my-image:functional-899000 done
#8 DONE 0.0s
I0815 16:16:41.227374    2316 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-899000 /var/lib/minikube/build/build.338651534: (1.751887625s)
I0815 16:16:41.227445    2316 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.338651534
I0815 16:16:41.232104    2316 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.338651534.tar
I0815 16:16:41.235718    2316 build_images.go:217] Built localhost/my-image:functional-899000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.338651534.tar
I0815 16:16:41.235733    2316 build_images.go:133] succeeded building to: functional-899000
I0815 16:16:41.235736    2316 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.99s)
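
The three BuildKit steps above ([1/3] FROM gcr.io/k8s-minikube/busybox, [2/3] RUN true, [3/3] ADD content.txt /) imply a Dockerfile along these lines. The contents are reconstructed from the log, not copied from the repo, so the actual testdata/build/Dockerfile may differ:

    $ cat testdata/build/Dockerfile    # contents inferred from the BuildKit output above
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /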

TestFunctional/parallel/ImageCommands/Setup (1.84s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.821553209s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-899000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.84s)

TestFunctional/parallel/DockerEnv/bash (0.29s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-899000 docker-env) && out/minikube-darwin-arm64 status -p functional-899000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-899000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.29s)
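
The docker-env check above works by exporting DOCKER_HOST and related variables into the calling shell, so that the host's docker client talks to the Docker daemon inside the minikube VM. A minimal sketch of the same pattern, using this run's profile name:

    $ eval $(out/minikube-darwin-arm64 -p functional-899000 docker-env)
    $ docker images    # now served by the Docker daemon inside the VM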

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-899000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-899000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-xqp5c" [8934fa28-b843-4249-b4ca-ac02080088d2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-xqp5c" [8934fa28-b843-4249-b4ca-ac02080088d2] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.01002075s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image load --daemon kicbase/echo-server:functional-899000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image load --daemon kicbase/echo-server:functional-899000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.38s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-899000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image load --daemon kicbase/echo-server:functional-899000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image save kicbase/echo-server:functional-899000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image rm kicbase/echo-server:functional-899000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-899000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 image save --daemon kicbase/echo-server:functional-899000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-899000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)
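
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise one save/remove/load round trip. Condensed into a single sequence, with paths and tags exactly as used in this run:

    $ out/minikube-darwin-arm64 -p functional-899000 image save kicbase/echo-server:functional-899000 /Users/jenkins/workspace/echo-server-save.tar
    $ out/minikube-darwin-arm64 -p functional-899000 image rm kicbase/echo-server:functional-899000
    $ out/minikube-darwin-arm64 -p functional-899000 image load /Users/jenkins/workspace/echo-server-save.tar
    $ out/minikube-darwin-arm64 -p functional-899000 image ls    # the tag should be listed again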

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-899000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-899000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-899000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-899000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2133: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-899000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-899000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [25b3058f-ffeb-4650-9968-256aa098df65] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [25b3058f-ffeb-4650-9968-256aa098df65] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.008995s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.11s)

TestFunctional/parallel/ServiceCmd/List (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.13s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 service list -o json
functional_test.go:1494: Took "87.609042ms" to run "out/minikube-darwin-arm64 -p functional-899000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:31905
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:31905
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-899000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.71.127 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
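
The tunnel tests in this group all ride on one workflow: keep a tunnel process running, wait for the LoadBalancer service to be assigned an ingress IP, then reach that IP from the host. Condensed from the steps above; the IP is the one observed in this run, and the curl is illustrative rather than the exact check the test performs:

    $ out/minikube-darwin-arm64 -p functional-899000 tunnel --alsologtostderr &
    $ kubectl --context functional-899000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    10.100.71.127
    $ curl http://10.100.71.127    # reachable from the host while the tunnel runs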

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-899000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "90.272875ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.95725ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "92.055792ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "34.514834ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)

TestFunctional/parallel/MountCmd/any-port (5.13s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-899000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port172792078/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723763791356602000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port172792078/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723763791356602000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port172792078/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723763791356602000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port172792078/001/test-1723763791356602000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (65.32875ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 15 23:16 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 15 23:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 15 23:16 test-1723763791356602000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh cat /mount-9p/test-1723763791356602000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-899000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b] Pending
helpers_test.go:344: "busybox-mount" [d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d8c3b9f8-cb9d-44dd-9b33-e3c463bbf35b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.005179792s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-899000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-899000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port172792078/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.13s)
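
Each MountCmd test drives the same 9p workflow: start the mount daemon pointing a host directory at a guest path, confirm the mount with findmnt, then exercise the directory over ssh. A condensed sketch, where <host-dir> stands in for the temp directory used above:

    $ out/minikube-darwin-arm64 mount -p functional-899000 <host-dir>:/mount-9p --alsologtostderr -v=1 &
    $ out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T /mount-9p | grep 9p"
    $ out/minikube-darwin-arm64 -p functional-899000 ssh -- ls -la /mount-9p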

TestFunctional/parallel/MountCmd/specific-port (0.8s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-899000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1556637293/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.744167ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-899000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1556637293/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-899000 ssh "sudo umount -f /mount-9p": exit status 1 (66.123334ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-899000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-899000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1556637293/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.80s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-899000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4012074900/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-899000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4012074900/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-899000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4012074900/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T" /mount1: exit status 1 (84.26775ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T" /mount2: exit status 1 (62.920167ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-899000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-899000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-899000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4012074900/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-899000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4012074900/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-899000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4012074900/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-899000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-899000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-899000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (182.65s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-719000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0815 16:18:56.880471    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:19:24.614755    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/addons-156000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-719000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m2.468333792s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (182.65s)
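
The whole multi-control-plane suite hangs off this single start invocation; --ha provisions additional control-plane nodes, and status then reports each node. Flags exactly as in the run above:

    $ out/minikube-darwin-arm64 start -p ha-719000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2
    $ out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr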

TestMultiControlPlane/serial/DeployApp (4.56s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-719000 -- rollout status deployment/busybox: (2.98010625s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-cb4bx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-hdbk9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-qr6cq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-cb4bx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-hdbk9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-qr6cq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-cb4bx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-hdbk9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-qr6cq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.56s)

TestMultiControlPlane/serial/PingHostFromPods (0.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-cb4bx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-cb4bx -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-hdbk9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-hdbk9 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-qr6cq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-719000 -- exec busybox-7dff88458-qr6cq -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.72s)

TestMultiControlPlane/serial/AddWorkerNode (54.76s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-719000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-719000 -v=7 --alsologtostderr: (54.538918667s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.76s)

TestMultiControlPlane/serial/NodeLabels (0.16s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-719000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.16s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.23s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.23s)

TestMultiControlPlane/serial/CopyFile (4.22s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp testdata/cp-test.txt ha-719000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3442844892/001/cp-test_ha-719000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000:/home/docker/cp-test.txt ha-719000-m02:/home/docker/cp-test_ha-719000_ha-719000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m02 "sudo cat /home/docker/cp-test_ha-719000_ha-719000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000:/home/docker/cp-test.txt ha-719000-m03:/home/docker/cp-test_ha-719000_ha-719000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m03 "sudo cat /home/docker/cp-test_ha-719000_ha-719000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000:/home/docker/cp-test.txt ha-719000-m04:/home/docker/cp-test_ha-719000_ha-719000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m04 "sudo cat /home/docker/cp-test_ha-719000_ha-719000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp testdata/cp-test.txt ha-719000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3442844892/001/cp-test_ha-719000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000-m02:/home/docker/cp-test.txt ha-719000:/home/docker/cp-test_ha-719000-m02_ha-719000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000 "sudo cat /home/docker/cp-test_ha-719000-m02_ha-719000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000-m02:/home/docker/cp-test.txt ha-719000-m03:/home/docker/cp-test_ha-719000-m02_ha-719000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m03 "sudo cat /home/docker/cp-test_ha-719000-m02_ha-719000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000-m02:/home/docker/cp-test.txt ha-719000-m04:/home/docker/cp-test_ha-719000-m02_ha-719000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m04 "sudo cat /home/docker/cp-test_ha-719000-m02_ha-719000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp testdata/cp-test.txt ha-719000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3442844892/001/cp-test_ha-719000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000-m03:/home/docker/cp-test.txt ha-719000:/home/docker/cp-test_ha-719000-m03_ha-719000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000 "sudo cat /home/docker/cp-test_ha-719000-m03_ha-719000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000-m03:/home/docker/cp-test.txt ha-719000-m02:/home/docker/cp-test_ha-719000-m03_ha-719000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m02 "sudo cat /home/docker/cp-test_ha-719000-m03_ha-719000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000-m03:/home/docker/cp-test.txt ha-719000-m04:/home/docker/cp-test_ha-719000-m03_ha-719000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m04 "sudo cat /home/docker/cp-test_ha-719000-m03_ha-719000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp testdata/cp-test.txt ha-719000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3442844892/001/cp-test_ha-719000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000-m04:/home/docker/cp-test.txt ha-719000:/home/docker/cp-test_ha-719000-m04_ha-719000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000 "sudo cat /home/docker/cp-test_ha-719000-m04_ha-719000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000-m04:/home/docker/cp-test.txt ha-719000-m02:/home/docker/cp-test_ha-719000-m04_ha-719000-m02.txt
E0815 16:20:53.500945    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m04 "sudo cat /home/docker/cp-test.txt"
E0815 16:20:53.507570    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:20:53.520350    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:20:53.541821    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m02 "sudo cat /home/docker/cp-test_ha-719000-m04_ha-719000-m02.txt"
E0815 16:20:53.584329    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 cp ha-719000-m04:/home/docker/cp-test.txt ha-719000-m03:/home/docker/cp-test_ha-719000-m04_ha-719000-m03.txt
E0815 16:20:53.666014    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m03 "sudo cat /home/docker/cp-test_ha-719000-m04_ha-719000-m03.txt"
E0815 16:20:53.828038    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.22s)
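
Every CopyFile iteration above is the same two-step check, parameterized over source and destination nodes: cp the test file, then cat it back over ssh on the target node. One representative pair from the run:

    $ out/minikube-darwin-arm64 -p ha-719000 cp testdata/cp-test.txt ha-719000-m02:/home/docker/cp-test.txt
    $ out/minikube-darwin-arm64 -p ha-719000 ssh -n ha-719000-m02 "sudo cat /home/docker/cp-test.txt"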

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0815 16:35:53.513099    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:37:16.592124    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.092682709s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.09s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-801000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-801000 --output=json --user=testUser: (3.347426708s)
--- PASS: TestJSONOutput/stop/Command (3.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-560000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-560000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.86225ms)

-- stdout --
	{"specversion":"1.0","id":"37b0a9e3-fd98-4ff9-b261-32d52f2dc33a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-560000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b04b14d5-82da-464c-a39e-935a860cfe33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19452"}}
	{"specversion":"1.0","id":"c54912e8-f009-4e25-9836-94202cdeed9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig"}}
	{"specversion":"1.0","id":"1691af4e-d1b1-4cd6-862b-0fc68e330141","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2b75ec6d-ff30-4dbe-bae5-3342673d1f05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0a737900-addd-4531-9d8c-9aeea42736bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube"}}
	{"specversion":"1.0","id":"6db9da63-f167-4703-a5b8-2aed85463c32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"90f6e040-73cf-498a-89c8-0a53efaa224d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-560000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-560000
--- PASS: TestErrorJSONOutput (0.20s)
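The events in the stdout block above are CloudEvents-framed JSON, one object per line. A minimal Go sketch (not part of the test suite; the struct below only mirrors the fields visible in this log) for decoding such lines and surfacing errors like DRV_UNSUPPORTED_OS:

	// parse_events.go - decodes minikube --output=json event lines from stdin.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the fields visible in the captured output; "data" is a
	// flat string map in every event shown above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // skip non-JSON lines interleaved in the log
			}
			// Surface errors such as DRV_UNSUPPORTED_OS from the run above.
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit %s): %s\n",
					e.Data["name"], e.Data["exitcode"], e.Data["message"])
			}
		}
	}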

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.00s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-255000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-255000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.884417ms)

-- stdout --
	* [NoKubernetes-255000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
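The check above expects exit status 14 (MK_USAGE) when --no-kubernetes is combined with --kubernetes-version. A minimal Go sketch of the same exit-code assertion via os/exec (illustrative only, not the verbatim test code; the binary path and flags are taken from the log):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Flags copied from the log above; the binary path is an assumption.
		cmd := exec.Command("out/minikube-darwin-arm64", "start",
			"-p", "NoKubernetes-255000", "--no-kubernetes",
			"--kubernetes-version=1.20", "--driver=qemu2")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 14 {
			fmt.Println("got expected MK_USAGE exit status 14")
			return
		}
		fmt.Println("unexpected result:", err)
	}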

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-255000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-255000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.585542ms)

-- stdout --
	* The control-plane node NoKubernetes-255000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-255000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.48s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.790383459s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.691668834s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.48s)

TestNoKubernetes/serial/Stop (3.73s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-255000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-255000: (3.731000125s)
--- PASS: TestNoKubernetes/serial/Stop (3.73s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-255000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-255000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.624667ms)

-- stdout --
	* The control-plane node NoKubernetes-255000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-255000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (3.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-250000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-250000 --alsologtostderr -v=3: (3.621307292s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.62s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-889000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-250000 -n old-k8s-version-250000: exit status 7 (39.205292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-250000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)
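The --format={{.Host}} flag above is a Go template rendered against minikube's status structure, and exit status 7 encodes a stopped host rather than a command failure, which is why the harness notes it "may be ok". A minimal Go sketch of that kind of template rendering (the struct here is illustrative, not minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// status stands in for the structure the --format template is applied to;
	// the field names here are assumptions based on the flag usage above.
	type status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// Matches the "Stopped" line captured in the stdout block above.
		_ = tmpl.Execute(os.Stdout, status{Host: "Stopped"})
	}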

TestStartStop/group/no-preload/serial/Stop (3.09s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-066000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-066000 --alsologtostderr -v=3: (3.089968416s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-066000 -n no-preload-066000: exit status 7 (55.813042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-066000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-645000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-645000 --alsologtostderr -v=3: (3.139489625s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-645000 -n embed-certs-645000: exit status 7 (54.741958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-645000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-911000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-911000 --alsologtostderr -v=3: (3.521709458s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.52s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-911000 -n default-k8s-diff-port-911000: exit status 7 (55.91875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-911000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-523000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.72s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-523000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-523000 --alsologtostderr -v=3: (3.723389458s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.72s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-523000 -n newest-cni-523000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-523000 -n newest-cni-523000: exit status 7 (57.690083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-523000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/270)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.32s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0815 16:50:53.641379    1446 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-964/.minikube/profiles/functional-899000/client.crt: no such file or directory" logger="UnhandledError"
panic.go:626: 
----------------------- debugLogs start: cilium-972000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-972000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /etc/hosts:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /etc/resolv.conf:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-972000

>>> host: crictl pods:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: crictl containers:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> k8s: describe netcat deployment:
error: context "cilium-972000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-972000" does not exist

>>> k8s: netcat logs:
error: context "cilium-972000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-972000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-972000" does not exist

>>> k8s: coredns logs:
error: context "cilium-972000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-972000" does not exist

>>> k8s: api server logs:
error: context "cilium-972000" does not exist

>>> host: /etc/cni:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: ip a s:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: ip r s:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: iptables-save:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: iptables table nat:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-972000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-972000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-972000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-972000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-972000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-972000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-972000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-972000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-972000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-972000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-972000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: kubelet daemon config:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> k8s: kubelet logs:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-972000

>>> host: docker daemon status:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: docker daemon config:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: docker system info:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: cri-docker daemon status:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: cri-docker daemon config:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: cri-dockerd version:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: containerd daemon status:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: containerd daemon config:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: containerd config dump:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: crio daemon status:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: crio daemon config:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /etc/crio:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: crio config:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

----------------------- debugLogs end: cilium-972000 [took: 2.211358583s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-972000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-972000
--- SKIP: TestNetworkPlugins/group/cilium (2.32s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-000000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-000000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
