Test Report: QEMU_macOS 19522

d15490255971b1813e1f056874620592048fd695:2024-08-27:35972

Tests failed (94/270)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 18.44
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.96
46 TestCertOptions 10.36
47 TestCertExpiration 195.39
48 TestDockerFlags 10.14
49 TestForceSystemdFlag 10.26
50 TestForceSystemdEnv 11.1
95 TestFunctional/parallel/ServiceCmdConnect 36.31
167 TestMultiControlPlane/serial/StopSecondaryNode 312.28
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 225.1
169 TestMultiControlPlane/serial/RestartSecondaryNode 305.28
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.51
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 202.1
177 TestImageBuild/serial/Setup 10
180 TestJSONOutput/start/Command 9.75
186 TestJSONOutput/pause/Command 0.08
192 TestJSONOutput/unpause/Command 0.05
209 TestMinikubeProfile 10.24
212 TestMountStart/serial/StartWithMountFirst 10.01
215 TestMultiNode/serial/FreshStart2Nodes 9.91
216 TestMultiNode/serial/DeployApp2Nodes 108.55
217 TestMultiNode/serial/PingHostFrom2Pods 0.09
218 TestMultiNode/serial/AddNode 0.08
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.08
221 TestMultiNode/serial/CopyFile 0.06
222 TestMultiNode/serial/StopNode 0.14
223 TestMultiNode/serial/StartAfterStop 54.36
224 TestMultiNode/serial/RestartKeepsNodes 8.29
225 TestMultiNode/serial/DeleteNode 0.1
226 TestMultiNode/serial/StopMultiNode 3.06
227 TestMultiNode/serial/RestartMultiNode 5.26
228 TestMultiNode/serial/ValidateNameConflict 20.25
232 TestPreload 9.96
234 TestScheduledStopUnix 10.02
235 TestSkaffold 12.39
238 TestRunningBinaryUpgrade 599.08
240 TestKubernetesUpgrade 17.25
253 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.48
254 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.01
256 TestStoppedBinaryUpgrade/Upgrade 573.94
258 TestPause/serial/Start 9.92
268 TestNoKubernetes/serial/StartWithK8s 9.96
269 TestNoKubernetes/serial/StartWithStopK8s 5.3
270 TestNoKubernetes/serial/Start 5.31
274 TestNoKubernetes/serial/StartNoArgs 5.35
276 TestNetworkPlugins/group/auto/Start 9.99
277 TestNetworkPlugins/group/flannel/Start 9.98
278 TestNetworkPlugins/group/enable-default-cni/Start 9.81
279 TestNetworkPlugins/group/kindnet/Start 9.79
280 TestNetworkPlugins/group/bridge/Start 9.82
281 TestNetworkPlugins/group/kubenet/Start 9.85
282 TestNetworkPlugins/group/custom-flannel/Start 10.08
283 TestNetworkPlugins/group/calico/Start 9.77
284 TestNetworkPlugins/group/false/Start 9.87
287 TestStartStop/group/old-k8s-version/serial/FirstStart 10.29
288 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
289 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
292 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
293 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
294 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
295 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
296 TestStartStop/group/old-k8s-version/serial/Pause 0.1
298 TestStartStop/group/no-preload/serial/FirstStart 12.14
300 TestStartStop/group/embed-certs/serial/FirstStart 9.86
301 TestStartStop/group/no-preload/serial/DeployApp 0.09
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
305 TestStartStop/group/no-preload/serial/SecondStart 5.26
306 TestStartStop/group/embed-certs/serial/DeployApp 0.09
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
310 TestStartStop/group/embed-certs/serial/SecondStart 5.26
311 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
314 TestStartStop/group/no-preload/serial/Pause 0.1
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.92
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
318 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
319 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
320 TestStartStop/group/embed-certs/serial/Pause 0.1
322 TestStartStop/group/newest-cni/serial/FirstStart 10.19
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
327 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.09
332 TestStartStop/group/newest-cni/serial/SecondStart 5.26
333 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
334 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
335 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
336 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
340 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (18.44s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-712000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-712000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (18.438388083s)

-- stdout --
	{"specversion":"1.0","id":"e523492c-48c7-4bee-b6a7-af232dbd82b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-712000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"56765644-0fc6-4b88-8ede-9797b2d74808","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19522"}}
	{"specversion":"1.0","id":"f1bf0430-13cf-4245-8b21-f9a3aa042c32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig"}}
	{"specversion":"1.0","id":"10cacb18-e65b-421d-bfa2-d6c5e0e5778b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"dd475b51-2d9b-4858-8413-738c4cb213e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"40fb19a3-4074-4285-8982-b9d4227cbba8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube"}}
	{"specversion":"1.0","id":"2fafdc5f-80ce-4c69-8772-07bf208c7cd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"14adeb4d-0586-4eb0-863e-851d66fc2f78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"843c85ad-2c03-45a6-9932-2110eb6db1d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2f1f5400-1256-4889-a4c0-752fd7be58d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"332151a9-d3e5-4d0b-a8e4-53f5282c7a3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-712000\" primary control-plane node in \"download-only-712000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0cd10148-084a-4b7e-8496-934bd004827f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"34cbf0ac-30c0-401c-8fad-6a0d561aaacd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19522-983/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10734f920 0x10734f920 0x10734f920 0x10734f920 0x10734f920 0x10734f920 0x10734f920] Decompressors:map[bz2:0x1400065f900 gz:0x1400065f908 tar:0x1400065f8b0 tar.bz2:0x1400065f8c0 tar.gz:0x1400065f8d0 tar.xz:0x1400065f8e0 tar.zst:0x1400065f8f0 tbz2:0x1400065f8c0 tgz:0x140
0065f8d0 txz:0x1400065f8e0 tzst:0x1400065f8f0 xz:0x1400065f910 zip:0x1400065f920 zst:0x1400065f918] Getters:map[file:0x14000634670 http:0x1400017c230 https:0x1400017c280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"8d7dedf9-38b1-41a1-b650-47e65c31ee8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0827 14:36:31.487855    1465 out.go:345] Setting OutFile to fd 1 ...
	I0827 14:36:31.487990    1465 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 14:36:31.487993    1465 out.go:358] Setting ErrFile to fd 2...
	I0827 14:36:31.487996    1465 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 14:36:31.488118    1465 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	W0827 14:36:31.488213    1465 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19522-983/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19522-983/.minikube/config/config.json: no such file or directory
	I0827 14:36:31.489518    1465 out.go:352] Setting JSON to true
	I0827 14:36:31.507464    1465 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":356,"bootTime":1724794235,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 14:36:31.507527    1465 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 14:36:31.512841    1465 out.go:97] [download-only-712000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 14:36:31.512956    1465 notify.go:220] Checking for updates...
	W0827 14:36:31.513019    1465 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball: no such file or directory
	I0827 14:36:31.515789    1465 out.go:169] MINIKUBE_LOCATION=19522
	I0827 14:36:31.519830    1465 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 14:36:31.524805    1465 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 14:36:31.527787    1465 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 14:36:31.530774    1465 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	W0827 14:36:31.536823    1465 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0827 14:36:31.537076    1465 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 14:36:31.541782    1465 out.go:97] Using the qemu2 driver based on user configuration
	I0827 14:36:31.541801    1465 start.go:297] selected driver: qemu2
	I0827 14:36:31.541822    1465 start.go:901] validating driver "qemu2" against <nil>
	I0827 14:36:31.541899    1465 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 14:36:31.544792    1465 out.go:169] Automatically selected the socket_vmnet network
	I0827 14:36:31.550577    1465 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0827 14:36:31.550673    1465 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0827 14:36:31.550758    1465 cni.go:84] Creating CNI manager for ""
	I0827 14:36:31.550777    1465 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0827 14:36:31.550828    1465 start.go:340] cluster config:
	{Name:download-only-712000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-712000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 14:36:31.556030    1465 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 14:36:31.560820    1465 out.go:97] Downloading VM boot image ...
	I0827 14:36:31.560849    1465 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso
	I0827 14:36:37.868979    1465 out.go:97] Starting "download-only-712000" primary control-plane node in "download-only-712000" cluster
	I0827 14:36:37.868997    1465 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0827 14:36:37.932357    1465 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0827 14:36:37.932367    1465 cache.go:56] Caching tarball of preloaded images
	I0827 14:36:37.932539    1465 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0827 14:36:37.937147    1465 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0827 14:36:37.937154    1465 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0827 14:36:38.024520    1465 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0827 14:36:48.637640    1465 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0827 14:36:48.637815    1465 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0827 14:36:49.331307    1465 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0827 14:36:49.331505    1465 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/download-only-712000/config.json ...
	I0827 14:36:49.331520    1465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/download-only-712000/config.json: {Name:mk14739c14f7bcda25e7b10d533a7a0346d39491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 14:36:49.331759    1465 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0827 14:36:49.331975    1465 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0827 14:36:49.854688    1465 out.go:193] 
	W0827 14:36:49.860617    1465 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19522-983/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10734f920 0x10734f920 0x10734f920 0x10734f920 0x10734f920 0x10734f920 0x10734f920] Decompressors:map[bz2:0x1400065f900 gz:0x1400065f908 tar:0x1400065f8b0 tar.bz2:0x1400065f8c0 tar.gz:0x1400065f8d0 tar.xz:0x1400065f8e0 tar.zst:0x1400065f8f0 tbz2:0x1400065f8c0 tgz:0x1400065f8d0 txz:0x1400065f8e0 tzst:0x1400065f8f0 xz:0x1400065f910 zip:0x1400065f920 zst:0x1400065f918] Getters:map[file:0x14000634670 http:0x1400017c230 https:0x1400017c280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0827 14:36:49.860645    1465 out_reason.go:110] 
	W0827 14:36:49.869462    1465 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 14:36:49.873542    1465 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-712000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (18.44s)
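
The exit status 40 above traces to a single step: caching kubectl for v1.20.0 on darwin/arm64 fails because the checksum URL returns HTTP 404. The 404 can be reproduced from any machine with the commands below (a diagnostic sketch built from the URLs in the log, not part of the test suite):

	# Both requests should report 404, matching the "bad response code: 404"
	# in the getter error above (the log itself only shows the checksum request failing).
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n1
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl | head -n1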

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19522-983/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
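
This subtest only verifies that the kubectl binary cached by the previous subtest exists on disk, so it fails as a direct consequence of the download error above. The assertion reduces to a stat of the cache path quoted in the log (shown here for illustration):

	# Fails with "no such file or directory" because the earlier download never completed.
	stat /Users/jenkins/minikube-integration/19522-983/.minikube/cache/darwin/arm64/v1.20.0/kubectl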

TestOffline (9.96s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-403000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-403000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.812853417s)

-- stdout --
	* [offline-docker-403000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-403000" primary control-plane node in "offline-docker-403000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-403000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:21:27.377280    3518 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:21:27.377425    3518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:21:27.377428    3518 out.go:358] Setting ErrFile to fd 2...
	I0827 15:21:27.377431    3518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:21:27.377558    3518 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:21:27.378705    3518 out.go:352] Setting JSON to false
	I0827 15:21:27.396604    3518 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3052,"bootTime":1724794235,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:21:27.396679    3518 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:21:27.402501    3518 out.go:177] * [offline-docker-403000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:21:27.408410    3518 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:21:27.408429    3518 notify.go:220] Checking for updates...
	I0827 15:21:27.419510    3518 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:21:27.422306    3518 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:21:27.425416    3518 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:21:27.428353    3518 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:21:27.431422    3518 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:21:27.434734    3518 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:21:27.434796    3518 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:21:27.438354    3518 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:21:27.445420    3518 start.go:297] selected driver: qemu2
	I0827 15:21:27.445431    3518 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:21:27.445439    3518 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:21:27.447406    3518 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:21:27.450365    3518 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:21:27.453454    3518 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:21:27.453470    3518 cni.go:84] Creating CNI manager for ""
	I0827 15:21:27.453477    3518 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:21:27.453482    3518 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 15:21:27.453512    3518 start.go:340] cluster config:
	{Name:offline-docker-403000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:21:27.457047    3518 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:21:27.464404    3518 out.go:177] * Starting "offline-docker-403000" primary control-plane node in "offline-docker-403000" cluster
	I0827 15:21:27.468405    3518 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:21:27.468441    3518 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:21:27.468449    3518 cache.go:56] Caching tarball of preloaded images
	I0827 15:21:27.468526    3518 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:21:27.468532    3518 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:21:27.468599    3518 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/offline-docker-403000/config.json ...
	I0827 15:21:27.468609    3518 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/offline-docker-403000/config.json: {Name:mk474cef5c345b751a03e24c92b1365340a5b9d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:21:27.468894    3518 start.go:360] acquireMachinesLock for offline-docker-403000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:21:27.468933    3518 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "offline-docker-403000"
	I0827 15:21:27.468948    3518 start.go:93] Provisioning new machine with config: &{Name:offline-docker-403000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:21:27.468986    3518 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:21:27.477356    3518 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0827 15:21:27.493378    3518 start.go:159] libmachine.API.Create for "offline-docker-403000" (driver="qemu2")
	I0827 15:21:27.493419    3518 client.go:168] LocalClient.Create starting
	I0827 15:21:27.493502    3518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:21:27.493530    3518 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:27.493539    3518 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:27.493584    3518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:21:27.493607    3518 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:27.493614    3518 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:27.494003    3518 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:21:27.646017    3518 main.go:141] libmachine: Creating SSH key...
	I0827 15:21:27.670075    3518 main.go:141] libmachine: Creating Disk image...
	I0827 15:21:27.670089    3518 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:21:27.670327    3518 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/disk.qcow2
	I0827 15:21:27.681755    3518 main.go:141] libmachine: STDOUT: 
	I0827 15:21:27.681778    3518 main.go:141] libmachine: STDERR: 
	I0827 15:21:27.681830    3518 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/disk.qcow2 +20000M
	I0827 15:21:27.694467    3518 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:21:27.694490    3518 main.go:141] libmachine: STDERR: 
	I0827 15:21:27.694508    3518 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/disk.qcow2
	I0827 15:21:27.694512    3518 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:21:27.694523    3518 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:21:27.694557    3518 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:fd:85:04:7a:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/disk.qcow2
	I0827 15:21:27.696179    3518 main.go:141] libmachine: STDOUT: 
	I0827 15:21:27.696201    3518 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:21:27.696220    3518 client.go:171] duration metric: took 202.802333ms to LocalClient.Create
	I0827 15:21:29.698241    3518 start.go:128] duration metric: took 2.229320542s to createHost
	I0827 15:21:29.698261    3518 start.go:83] releasing machines lock for "offline-docker-403000", held for 2.229396083s
	W0827 15:21:29.698297    3518 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:21:29.714711    3518 out.go:177] * Deleting "offline-docker-403000" in qemu2 ...
	W0827 15:21:29.733318    3518 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:21:29.733328    3518 start.go:729] Will try again in 5 seconds ...
	I0827 15:21:34.735337    3518 start.go:360] acquireMachinesLock for offline-docker-403000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:21:34.735787    3518 start.go:364] duration metric: took 349µs to acquireMachinesLock for "offline-docker-403000"
	I0827 15:21:34.735925    3518 start.go:93] Provisioning new machine with config: &{Name:offline-docker-403000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:21:34.736198    3518 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:21:34.745814    3518 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0827 15:21:34.795613    3518 start.go:159] libmachine.API.Create for "offline-docker-403000" (driver="qemu2")
	I0827 15:21:34.795680    3518 client.go:168] LocalClient.Create starting
	I0827 15:21:34.795804    3518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:21:34.795869    3518 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:34.795888    3518 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:34.795986    3518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:21:34.796035    3518 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:34.796050    3518 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:34.796674    3518 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:21:34.974479    3518 main.go:141] libmachine: Creating SSH key...
	I0827 15:21:35.092806    3518 main.go:141] libmachine: Creating Disk image...
	I0827 15:21:35.092820    3518 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:21:35.093049    3518 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/disk.qcow2
	I0827 15:21:35.102544    3518 main.go:141] libmachine: STDOUT: 
	I0827 15:21:35.102561    3518 main.go:141] libmachine: STDERR: 
	I0827 15:21:35.102611    3518 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/disk.qcow2 +20000M
	I0827 15:21:35.110651    3518 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:21:35.110666    3518 main.go:141] libmachine: STDERR: 
	I0827 15:21:35.110678    3518 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/disk.qcow2
	I0827 15:21:35.110683    3518 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:21:35.110694    3518 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:21:35.110724    3518 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:15:a4:53:63:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/offline-docker-403000/disk.qcow2
	I0827 15:21:35.112315    3518 main.go:141] libmachine: STDOUT: 
	I0827 15:21:35.112331    3518 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:21:35.112344    3518 client.go:171] duration metric: took 316.669292ms to LocalClient.Create
	I0827 15:21:37.114465    3518 start.go:128] duration metric: took 2.378313125s to createHost
	I0827 15:21:37.114631    3518 start.go:83] releasing machines lock for "offline-docker-403000", held for 2.378777458s
	W0827 15:21:37.114953    3518 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-403000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-403000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:21:37.130663    3518 out.go:201] 
	W0827 15:21:37.134786    3518 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:21:37.134827    3518 out.go:270] * 
	* 
	W0827 15:21:37.137563    3518 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:21:37.146673    3518 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-403000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-27 15:21:37.161784 -0700 PDT m=+2705.841550543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-403000 -n offline-docker-403000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-403000 -n offline-docker-403000: exit status 7 (66.463541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-403000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-403000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-403000
--- FAIL: TestOffline (9.96s)
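
TestOffline, and most of the other qemu2-driver failures in this run, hit the same host-side symptom: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). A minimal host-side sanity check, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs suggest (adjust for other install methods):

	# The socket must exist and have a listening daemon behind it;
	# "Connection refused" usually means the daemon is not running.
	ls -l /var/run/socket_vmnet
	sudo brew services start socket_vmnet   # restart the daemon (Homebrew-managed install assumed)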

TestCertOptions (10.36s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-737000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-737000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.094525166s)

-- stdout --
	* [cert-options-737000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-737000" primary control-plane node in "cert-options-737000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-737000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-737000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-737000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-737000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-737000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.715041ms)

-- stdout --
	* The control-plane node cert-options-737000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-737000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-737000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-737000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-737000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-737000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.135959ms)

-- stdout --
	* The control-plane node cert-options-737000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-737000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-737000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-737000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-737000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-27 15:22:08.794331 -0700 PDT m=+2737.475137835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-737000 -n cert-options-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-737000 -n cert-options-737000: exit status 7 (31.063917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-737000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-737000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-737000
--- FAIL: TestCertOptions (10.36s)
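
Every assertion in this test fails for the same upstream reason: the qemu2 driver could not reach the socket_vmnet socket ('ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused'), so no VM ever boots and the checks run against a host in state=Stopped. A triage sketch, assuming socket_vmnet was installed via Homebrew as minikube's qemu2 driver docs describe (paths and service management may differ on this CI agent):

	# Does the daemon's socket exist where the driver dials it?
	ls -l /var/run/socket_vmnet

	# (Re)start the daemon; it must run as root because vmnet requires
	# elevated privileges.
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet

	# Retry the failed start once the socket is back.
	out/minikube-darwin-arm64 start -p cert-options-737000 --driver=qemu2
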

TestCertExpiration (195.39s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-658000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-658000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.036821917s)

-- stdout --
	* [cert-expiration-658000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-658000" primary control-plane node in "cert-expiration-658000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-658000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-658000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-658000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-658000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-658000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.213328792s)

-- stdout --
	* [cert-expiration-658000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-658000" primary control-plane node in "cert-expiration-658000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-658000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-658000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-658000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-658000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-658000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-658000" primary control-plane node in "cert-expiration-658000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-658000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-658000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-658000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-27 15:25:08.688088 -0700 PDT m=+2917.374814835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-658000 -n cert-expiration-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-658000 -n cert-expiration-658000: exit status 7 (58.225083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-658000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-658000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-658000
--- FAIL: TestCertExpiration (195.39s)
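
The expiration logic itself is never exercised: both starts die on the same socket_vmnet connect, the first before the 3m certificates are issued and the second before the expired-cert warning could be printed. A manual spot-check of certificate expiry, assuming a profile that did boot:

	# Print the notAfter date of the apiserver certificate inside the VM.
	out/minikube-darwin-arm64 -p cert-expiration-658000 ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
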

TestDockerFlags (10.14s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-032000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-032000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.903779583s)

-- stdout --
	* [docker-flags-032000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-032000" primary control-plane node in "docker-flags-032000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-032000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:21:48.439578    3711 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:21:48.439708    3711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:21:48.439712    3711 out.go:358] Setting ErrFile to fd 2...
	I0827 15:21:48.439714    3711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:21:48.439838    3711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:21:48.440918    3711 out.go:352] Setting JSON to false
	I0827 15:21:48.457146    3711 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3073,"bootTime":1724794235,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:21:48.457224    3711 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:21:48.463305    3711 out.go:177] * [docker-flags-032000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:21:48.471174    3711 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:21:48.471219    3711 notify.go:220] Checking for updates...
	I0827 15:21:48.479218    3711 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:21:48.482186    3711 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:21:48.486170    3711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:21:48.489195    3711 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:21:48.492206    3711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:21:48.495453    3711 config.go:182] Loaded profile config "force-systemd-flag-671000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:21:48.495522    3711 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:21:48.495563    3711 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:21:48.500024    3711 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:21:48.507178    3711 start.go:297] selected driver: qemu2
	I0827 15:21:48.507186    3711 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:21:48.507191    3711 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:21:48.509500    3711 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:21:48.513991    3711 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:21:48.517240    3711 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0827 15:21:48.517263    3711 cni.go:84] Creating CNI manager for ""
	I0827 15:21:48.517280    3711 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:21:48.517284    3711 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 15:21:48.517312    3711 start.go:340] cluster config:
	{Name:docker-flags-032000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-032000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:21:48.521236    3711 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:21:48.530178    3711 out.go:177] * Starting "docker-flags-032000" primary control-plane node in "docker-flags-032000" cluster
	I0827 15:21:48.534164    3711 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:21:48.534182    3711 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:21:48.534191    3711 cache.go:56] Caching tarball of preloaded images
	I0827 15:21:48.534251    3711 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:21:48.534257    3711 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:21:48.534344    3711 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/docker-flags-032000/config.json ...
	I0827 15:21:48.534359    3711 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/docker-flags-032000/config.json: {Name:mk3d7dd7383333f19d33f898fca3f60e65ea2cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:21:48.534573    3711 start.go:360] acquireMachinesLock for docker-flags-032000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:21:48.534609    3711 start.go:364] duration metric: took 29.584µs to acquireMachinesLock for "docker-flags-032000"
	I0827 15:21:48.534621    3711 start.go:93] Provisioning new machine with config: &{Name:docker-flags-032000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-032000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:21:48.534664    3711 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:21:48.543201    3711 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0827 15:21:48.561975    3711 start.go:159] libmachine.API.Create for "docker-flags-032000" (driver="qemu2")
	I0827 15:21:48.562008    3711 client.go:168] LocalClient.Create starting
	I0827 15:21:48.562075    3711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:21:48.562113    3711 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:48.562123    3711 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:48.562168    3711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:21:48.562194    3711 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:48.562200    3711 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:48.562566    3711 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:21:48.713761    3711 main.go:141] libmachine: Creating SSH key...
	I0827 15:21:48.836784    3711 main.go:141] libmachine: Creating Disk image...
	I0827 15:21:48.836793    3711 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:21:48.837032    3711 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/disk.qcow2
	I0827 15:21:48.846436    3711 main.go:141] libmachine: STDOUT: 
	I0827 15:21:48.846456    3711 main.go:141] libmachine: STDERR: 
	I0827 15:21:48.846517    3711 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/disk.qcow2 +20000M
	I0827 15:21:48.854561    3711 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:21:48.854576    3711 main.go:141] libmachine: STDERR: 
	I0827 15:21:48.854600    3711 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/disk.qcow2
	I0827 15:21:48.854606    3711 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:21:48.854621    3711 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:21:48.854647    3711 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:b9:ac:42:c1:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/disk.qcow2
	I0827 15:21:48.856348    3711 main.go:141] libmachine: STDOUT: 
	I0827 15:21:48.856360    3711 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:21:48.856380    3711 client.go:171] duration metric: took 294.3775ms to LocalClient.Create
	I0827 15:21:50.858506    3711 start.go:128] duration metric: took 2.323895959s to createHost
	I0827 15:21:50.858628    3711 start.go:83] releasing machines lock for "docker-flags-032000", held for 2.3240295s
	W0827 15:21:50.858687    3711 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:21:50.878649    3711 out.go:177] * Deleting "docker-flags-032000" in qemu2 ...
	W0827 15:21:50.900437    3711 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:21:50.900455    3711 start.go:729] Will try again in 5 seconds ...
	I0827 15:21:55.902488    3711 start.go:360] acquireMachinesLock for docker-flags-032000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:21:55.902930    3711 start.go:364] duration metric: took 304.083µs to acquireMachinesLock for "docker-flags-032000"
	I0827 15:21:55.903057    3711 start.go:93] Provisioning new machine with config: &{Name:docker-flags-032000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-032000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:21:55.903369    3711 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:21:55.912834    3711 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0827 15:21:55.962610    3711 start.go:159] libmachine.API.Create for "docker-flags-032000" (driver="qemu2")
	I0827 15:21:55.962658    3711 client.go:168] LocalClient.Create starting
	I0827 15:21:55.962765    3711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:21:55.962837    3711 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:55.962855    3711 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:55.962919    3711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:21:55.962964    3711 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:55.962976    3711 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:55.963844    3711 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:21:56.131910    3711 main.go:141] libmachine: Creating SSH key...
	I0827 15:21:56.250378    3711 main.go:141] libmachine: Creating Disk image...
	I0827 15:21:56.250384    3711 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:21:56.250595    3711 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/disk.qcow2
	I0827 15:21:56.260084    3711 main.go:141] libmachine: STDOUT: 
	I0827 15:21:56.260104    3711 main.go:141] libmachine: STDERR: 
	I0827 15:21:56.260156    3711 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/disk.qcow2 +20000M
	I0827 15:21:56.268108    3711 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:21:56.268123    3711 main.go:141] libmachine: STDERR: 
	I0827 15:21:56.268133    3711 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/disk.qcow2
	I0827 15:21:56.268138    3711 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:21:56.268159    3711 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:21:56.268189    3711 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:fc:ec:d6:ba:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/docker-flags-032000/disk.qcow2
	I0827 15:21:56.269809    3711 main.go:141] libmachine: STDOUT: 
	I0827 15:21:56.269825    3711 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:21:56.269836    3711 client.go:171] duration metric: took 307.183167ms to LocalClient.Create
	I0827 15:21:58.271947    3711 start.go:128] duration metric: took 2.368625625s to createHost
	I0827 15:21:58.272043    3711 start.go:83] releasing machines lock for "docker-flags-032000", held for 2.369164833s
	W0827 15:21:58.272402    3711 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-032000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-032000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:21:58.286015    3711 out.go:201] 
	W0827 15:21:58.290011    3711 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:21:58.290045    3711 out.go:270] * 
	* 
	W0827 15:21:58.292688    3711 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:21:58.302986    3711 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-032000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-032000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-032000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (77.33225ms)

-- stdout --
	* The control-plane node docker-flags-032000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-032000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-032000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-032000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-032000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-032000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-032000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-032000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-032000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.598792ms)

-- stdout --
	* The control-plane node docker-flags-032000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-032000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-032000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-032000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-032000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-032000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-27 15:21:58.439695 -0700 PDT m=+2727.120160668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-032000 -n docker-flags-032000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-032000 -n docker-flags-032000: exit status 7 (28.643333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-032000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-032000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-032000
--- FAIL: TestDockerFlags (10.14s)
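
As in the previous tests, docker_test.go:63 and :73 end up matching the driver's "host is not running" message instead of dockerd's unit, because the start exited with status 80. On a booted VM the --docker-env and --docker-opt values should surface in the systemd unit; the check the test performs, sketched for a running profile:

	# Environment= should contain FOO=BAR and BAZ=BAT; ExecStart should
	# carry the --debug and --icc=true options minikube's provisioner adds.
	out/minikube-darwin-arm64 -p docker-flags-032000 ssh \
	  "sudo systemctl show docker --property=Environment --property=ExecStart --no-pager"
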

TestForceSystemdFlag (10.26s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-671000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-671000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.067550916s)

-- stdout --
	* [force-systemd-flag-671000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-671000" primary control-plane node in "force-systemd-flag-671000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-671000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:21:43.209725    3690 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:21:43.209872    3690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:21:43.209875    3690 out.go:358] Setting ErrFile to fd 2...
	I0827 15:21:43.209878    3690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:21:43.210003    3690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:21:43.211069    3690 out.go:352] Setting JSON to false
	I0827 15:21:43.227203    3690 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3068,"bootTime":1724794235,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:21:43.227270    3690 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:21:43.234062    3690 out.go:177] * [force-systemd-flag-671000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:21:43.241050    3690 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:21:43.241081    3690 notify.go:220] Checking for updates...
	I0827 15:21:43.250902    3690 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:21:43.254850    3690 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:21:43.257953    3690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:21:43.260993    3690 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:21:43.264024    3690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:21:43.267383    3690 config.go:182] Loaded profile config "force-systemd-env-232000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:21:43.267455    3690 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:21:43.267500    3690 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:21:43.271944    3690 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:21:43.278979    3690 start.go:297] selected driver: qemu2
	I0827 15:21:43.278985    3690 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:21:43.278990    3690 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:21:43.281363    3690 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:21:43.284004    3690 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:21:43.287124    3690 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0827 15:21:43.287152    3690 cni.go:84] Creating CNI manager for ""
	I0827 15:21:43.287159    3690 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:21:43.287164    3690 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 15:21:43.287190    3690 start.go:340] cluster config:
	{Name:force-systemd-flag-671000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:21:43.290941    3690 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:21:43.299996    3690 out.go:177] * Starting "force-systemd-flag-671000" primary control-plane node in "force-systemd-flag-671000" cluster
	I0827 15:21:43.303979    3690 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:21:43.303997    3690 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:21:43.304011    3690 cache.go:56] Caching tarball of preloaded images
	I0827 15:21:43.304097    3690 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:21:43.304112    3690 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:21:43.304195    3690 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/force-systemd-flag-671000/config.json ...
	I0827 15:21:43.304214    3690 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/force-systemd-flag-671000/config.json: {Name:mkf5e60248c844dc379f5d443418d1780e9f5f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:21:43.304454    3690 start.go:360] acquireMachinesLock for force-systemd-flag-671000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:21:43.304493    3690 start.go:364] duration metric: took 31.208µs to acquireMachinesLock for "force-systemd-flag-671000"
	I0827 15:21:43.304506    3690 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:21:43.304543    3690 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:21:43.312955    3690 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0827 15:21:43.332060    3690 start.go:159] libmachine.API.Create for "force-systemd-flag-671000" (driver="qemu2")
	I0827 15:21:43.332095    3690 client.go:168] LocalClient.Create starting
	I0827 15:21:43.332159    3690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:21:43.332190    3690 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:43.332199    3690 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:43.332235    3690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:21:43.332259    3690 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:43.332269    3690 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:43.332645    3690 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:21:43.486251    3690 main.go:141] libmachine: Creating SSH key...
	I0827 15:21:43.762178    3690 main.go:141] libmachine: Creating Disk image...
	I0827 15:21:43.762186    3690 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:21:43.762475    3690 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/disk.qcow2
	I0827 15:21:43.772225    3690 main.go:141] libmachine: STDOUT: 
	I0827 15:21:43.772249    3690 main.go:141] libmachine: STDERR: 
	I0827 15:21:43.772294    3690 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/disk.qcow2 +20000M
	I0827 15:21:43.780429    3690 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:21:43.780450    3690 main.go:141] libmachine: STDERR: 
	I0827 15:21:43.780463    3690 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/disk.qcow2
	I0827 15:21:43.780476    3690 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:21:43.780491    3690 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:21:43.780517    3690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:01:87:96:5e:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/disk.qcow2
	I0827 15:21:43.782140    3690 main.go:141] libmachine: STDOUT: 
	I0827 15:21:43.782159    3690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:21:43.782178    3690 client.go:171] duration metric: took 450.092583ms to LocalClient.Create
	I0827 15:21:45.784283    3690 start.go:128] duration metric: took 2.479800959s to createHost
	I0827 15:21:45.784331    3690 start.go:83] releasing machines lock for "force-systemd-flag-671000", held for 2.479911167s
	W0827 15:21:45.784425    3690 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:21:45.807436    3690 out.go:177] * Deleting "force-systemd-flag-671000" in qemu2 ...
	W0827 15:21:45.832142    3690 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:21:45.832163    3690 start.go:729] Will try again in 5 seconds ...
	I0827 15:21:50.834214    3690 start.go:360] acquireMachinesLock for force-systemd-flag-671000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:21:50.858724    3690 start.go:364] duration metric: took 24.37825ms to acquireMachinesLock for "force-systemd-flag-671000"
	I0827 15:21:50.858894    3690 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:21:50.859158    3690 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:21:50.867603    3690 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0827 15:21:50.916874    3690 start.go:159] libmachine.API.Create for "force-systemd-flag-671000" (driver="qemu2")
	I0827 15:21:50.916922    3690 client.go:168] LocalClient.Create starting
	I0827 15:21:50.917044    3690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:21:50.917109    3690 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:50.917128    3690 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:50.917184    3690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:21:50.917229    3690 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:50.917242    3690 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:50.917824    3690 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:21:51.084519    3690 main.go:141] libmachine: Creating SSH key...
	I0827 15:21:51.174222    3690 main.go:141] libmachine: Creating Disk image...
	I0827 15:21:51.174233    3690 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:21:51.174431    3690 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/disk.qcow2
	I0827 15:21:51.183641    3690 main.go:141] libmachine: STDOUT: 
	I0827 15:21:51.183662    3690 main.go:141] libmachine: STDERR: 
	I0827 15:21:51.183723    3690 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/disk.qcow2 +20000M
	I0827 15:21:51.191554    3690 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:21:51.191569    3690 main.go:141] libmachine: STDERR: 
	I0827 15:21:51.191582    3690 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/disk.qcow2
	I0827 15:21:51.191588    3690 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:21:51.191607    3690 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:21:51.191637    3690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:e5:3f:75:89:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-flag-671000/disk.qcow2
	I0827 15:21:51.193268    3690 main.go:141] libmachine: STDOUT: 
	I0827 15:21:51.193284    3690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:21:51.193298    3690 client.go:171] duration metric: took 276.380958ms to LocalClient.Create
	I0827 15:21:53.195416    3690 start.go:128] duration metric: took 2.336298792s to createHost
	I0827 15:21:53.195542    3690 start.go:83] releasing machines lock for "force-systemd-flag-671000", held for 2.336797833s
	W0827 15:21:53.196002    3690 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-671000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-671000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:21:53.209551    3690 out.go:201] 
	W0827 15:21:53.221894    3690 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:21:53.221949    3690 out.go:270] * 
	* 
	W0827 15:21:53.224545    3690 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:21:53.237621    3690 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-671000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-671000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-671000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.626875ms)

-- stdout --
	* The control-plane node force-systemd-flag-671000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-671000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-671000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-27 15:21:53.330871 -0700 PDT m=+2722.011168835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-671000 -n force-systemd-flag-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-671000 -n force-systemd-flag-671000: exit status 7 (34.301916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-671000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-671000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-671000
--- FAIL: TestForceSystemdFlag (10.26s)
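
Both qemu2 create attempts above fail at the same step: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM never boots even though the qcow2 disk image was created and resized successfully. A minimal triage sketch in shell, assuming a standard socket_vmnet install at the SocketVMnetPath/SocketVMnetClientPath values shown in the cluster config; the daemon invocation and the gateway address (inferred from the 192.168.105.x addresses elsewhere in this report) are assumptions, not taken from these logs:

    # Check whether the socket_vmnet daemon is listening on its unix socket;
    # a missing or stale socket matches the "Connection refused" above.
    ls -l /var/run/socket_vmnet

    # If the daemon is not running, start it manually (assumed invocation).
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet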

TestForceSystemdEnv (11.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-232000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-232000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.907387167s)

-- stdout --
	* [force-systemd-env-232000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-232000" primary control-plane node in "force-systemd-env-232000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-232000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:21:37.338727    3656 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:21:37.338870    3656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:21:37.338873    3656 out.go:358] Setting ErrFile to fd 2...
	I0827 15:21:37.338875    3656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:21:37.339014    3656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:21:37.340091    3656 out.go:352] Setting JSON to false
	I0827 15:21:37.356669    3656 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3062,"bootTime":1724794235,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:21:37.356736    3656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:21:37.362477    3656 out.go:177] * [force-systemd-env-232000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:21:37.370371    3656 notify.go:220] Checking for updates...
	I0827 15:21:37.374332    3656 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:21:37.382272    3656 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:21:37.391412    3656 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:21:37.394403    3656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:21:37.397387    3656 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:21:37.401404    3656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0827 15:21:37.404713    3656 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:21:37.404762    3656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:21:37.409366    3656 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:21:37.415363    3656 start.go:297] selected driver: qemu2
	I0827 15:21:37.415371    3656 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:21:37.415377    3656 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:21:37.417413    3656 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:21:37.427379    3656 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:21:37.434365    3656 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0827 15:21:37.434391    3656 cni.go:84] Creating CNI manager for ""
	I0827 15:21:37.434397    3656 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:21:37.434400    3656 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 15:21:37.434427    3656 start.go:340] cluster config:
	{Name:force-systemd-env-232000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:21:37.437847    3656 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:21:37.451306    3656 out.go:177] * Starting "force-systemd-env-232000" primary control-plane node in "force-systemd-env-232000" cluster
	I0827 15:21:37.455406    3656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:21:37.455456    3656 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:21:37.455474    3656 cache.go:56] Caching tarball of preloaded images
	I0827 15:21:37.455604    3656 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:21:37.455628    3656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:21:37.455713    3656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/force-systemd-env-232000/config.json ...
	I0827 15:21:37.455724    3656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/force-systemd-env-232000/config.json: {Name:mk4a350fb9a2e650d5655228c49d346e7f0781c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:21:37.455923    3656 start.go:360] acquireMachinesLock for force-systemd-env-232000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:21:37.455954    3656 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "force-systemd-env-232000"
	I0827 15:21:37.455965    3656 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:21:37.455998    3656 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:21:37.460329    3656 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0827 15:21:37.475891    3656 start.go:159] libmachine.API.Create for "force-systemd-env-232000" (driver="qemu2")
	I0827 15:21:37.475923    3656 client.go:168] LocalClient.Create starting
	I0827 15:21:37.475998    3656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:21:37.476034    3656 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:37.476042    3656 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:37.476078    3656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:21:37.476102    3656 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:37.476111    3656 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:37.476450    3656 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:21:37.629691    3656 main.go:141] libmachine: Creating SSH key...
	I0827 15:21:37.668599    3656 main.go:141] libmachine: Creating Disk image...
	I0827 15:21:37.668611    3656 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:21:37.668856    3656 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/disk.qcow2
	I0827 15:21:37.678308    3656 main.go:141] libmachine: STDOUT: 
	I0827 15:21:37.678328    3656 main.go:141] libmachine: STDERR: 
	I0827 15:21:37.678389    3656 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/disk.qcow2 +20000M
	I0827 15:21:37.686601    3656 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:21:37.686628    3656 main.go:141] libmachine: STDERR: 
	I0827 15:21:37.686641    3656 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/disk.qcow2
	I0827 15:21:37.686646    3656 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:21:37.686663    3656 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:21:37.686694    3656 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:d9:80:a0:d6:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/disk.qcow2
	I0827 15:21:37.688349    3656 main.go:141] libmachine: STDOUT: 
	I0827 15:21:37.688370    3656 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:21:37.688387    3656 client.go:171] duration metric: took 212.467708ms to LocalClient.Create
	I0827 15:21:39.690534    3656 start.go:128] duration metric: took 2.234580542s to createHost
	I0827 15:21:39.690631    3656 start.go:83] releasing machines lock for "force-systemd-env-232000", held for 2.234739542s
	W0827 15:21:39.690697    3656 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:21:39.697717    3656 out.go:177] * Deleting "force-systemd-env-232000" in qemu2 ...
	W0827 15:21:39.726059    3656 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:21:39.726094    3656 start.go:729] Will try again in 5 seconds ...
	I0827 15:21:44.728111    3656 start.go:360] acquireMachinesLock for force-systemd-env-232000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:21:45.784485    3656 start.go:364] duration metric: took 1.056243209s to acquireMachinesLock for "force-systemd-env-232000"
	I0827 15:21:45.784574    3656 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:21:45.784796    3656 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:21:45.798436    3656 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0827 15:21:45.849271    3656 start.go:159] libmachine.API.Create for "force-systemd-env-232000" (driver="qemu2")
	I0827 15:21:45.849322    3656 client.go:168] LocalClient.Create starting
	I0827 15:21:45.849445    3656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:21:45.849501    3656 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:45.849515    3656 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:45.849584    3656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:21:45.849628    3656 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:45.849640    3656 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:45.850085    3656 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:21:46.024368    3656 main.go:141] libmachine: Creating SSH key...
	I0827 15:21:46.143565    3656 main.go:141] libmachine: Creating Disk image...
	I0827 15:21:46.143570    3656 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:21:46.143772    3656 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/disk.qcow2
	I0827 15:21:46.153197    3656 main.go:141] libmachine: STDOUT: 
	I0827 15:21:46.153222    3656 main.go:141] libmachine: STDERR: 
	I0827 15:21:46.153287    3656 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/disk.qcow2 +20000M
	I0827 15:21:46.161283    3656 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:21:46.161298    3656 main.go:141] libmachine: STDERR: 
	I0827 15:21:46.161308    3656 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/disk.qcow2
	I0827 15:21:46.161313    3656 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:21:46.161319    3656 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:21:46.161354    3656 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:6d:3f:c7:10:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/force-systemd-env-232000/disk.qcow2
	I0827 15:21:46.163020    3656 main.go:141] libmachine: STDOUT: 
	I0827 15:21:46.163059    3656 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:21:46.163071    3656 client.go:171] duration metric: took 313.752791ms to LocalClient.Create
	I0827 15:21:48.165189    3656 start.go:128] duration metric: took 2.3804285s to createHost
	I0827 15:21:48.165259    3656 start.go:83] releasing machines lock for "force-systemd-env-232000", held for 2.380803708s
	W0827 15:21:48.165609    3656 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-232000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-232000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:21:48.179426    3656 out.go:201] 
	W0827 15:21:48.191310    3656 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:21:48.191337    3656 out.go:270] * 
	* 
	W0827 15:21:48.194061    3656 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:21:48.201886    3656 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-232000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-232000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-232000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.523625ms)

-- stdout --
	* The control-plane node force-systemd-env-232000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-232000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-232000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-27 15:21:48.295365 -0700 PDT m=+2716.975497251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-232000 -n force-systemd-env-232000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-232000 -n force-systemd-env-232000: exit status 7 (35.398458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-232000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-232000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-232000
--- FAIL: TestForceSystemdEnv (11.10s)
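
The failure pattern mirrors TestForceSystemdFlag: the start command exits with status 80 alongside "X Exiting due to GUEST_PROVISION", and every follow-up command against the never-created VM exits with status 83 alongside the "host is not running: state=Stopped" advisory, so the systemd cgroup-driver assertion is never reached. A sketch for reproducing the check outside the harness, reusing the exact arguments from the log above:

    # Re-run the failing start and capture the exit code minikube reports;
    # in these logs it is 80 while socket_vmnet is unreachable.
    out/minikube-darwin-arm64 start -p force-systemd-env-232000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2
    echo "start exit code: $?"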

TestFunctional/parallel/ServiceCmdConnect (36.31s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-289000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-289000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-qv4zc" [0d27c5f3-0fa5-4b0f-b6f2-5de79ddea215] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-qv4zc" [0d27c5f3-0fa5-4b0f-b6f2-5de79ddea215] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.008179041s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30471
functional_test.go:1661: error fetching http://192.168.105.4:30471: Get "http://192.168.105.4:30471": dial tcp 192.168.105.4:30471: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30471: Get "http://192.168.105.4:30471": dial tcp 192.168.105.4:30471: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30471: Get "http://192.168.105.4:30471": dial tcp 192.168.105.4:30471: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30471: Get "http://192.168.105.4:30471": dial tcp 192.168.105.4:30471: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30471: Get "http://192.168.105.4:30471": dial tcp 192.168.105.4:30471: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30471: Get "http://192.168.105.4:30471": dial tcp 192.168.105.4:30471: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30471: Get "http://192.168.105.4:30471": dial tcp 192.168.105.4:30471: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30471: Get "http://192.168.105.4:30471": dial tcp 192.168.105.4:30471: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-289000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-qv4zc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-289000/192.168.105.4
Start Time:       Tue, 27 Aug 2024 14:45:55 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://740ee964c3a6e390a7ee22e09edc69d485e2a502581b7de4d34a4b45733cb3a5
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 27 Aug 2024 14:46:17 -0700
      Finished:     Tue, 27 Aug 2024 14:46:17 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-db7xc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-db7xc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  35s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-qv4zc to functional-289000
  Normal   Pulling    35s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     30s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 4.059s (4.059s including waiting). Image size: 84957542 bytes.
  Normal   Created    13s (x3 over 30s)  kubelet            Created container echoserver-arm
  Normal   Started    13s (x3 over 30s)  kubelet            Started container echoserver-arm
  Normal   Pulled     13s (x2 over 30s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    1s (x4 over 29s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-qv4zc_default(0d27c5f3-0fa5-4b0f-b6f2-5de79ddea215)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-289000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
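
The "exec format error" in the container log points to an architecture mismatch rather than an application fault: the binary inside registry.k8s.io/echoserver-arm:1.8 cannot be executed on this arm64 node, so the container crash-loops and the pod never becomes Ready. A hedged way to confirm, querying the VM's Docker daemon over the same ssh path the tests use:

    # Print the platform the pulled image was built for; anything other than
    # linux/arm64 on this host would explain the exec failure.
    out/minikube-darwin-arm64 -p functional-289000 ssh "docker image inspect --format {{.Os}}/{{.Architecture}} registry.k8s.io/echoserver-arm:1.8"
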
functional_test.go:1614: (dbg) Run:  kubectl --context functional-289000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.107.249.163
IPs:                      10.107.249.163
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30471/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
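
Note that Endpoints is empty in the Service description above; that is the direct cause of the repeated "connection refused" on http://192.168.105.4:30471. With no Ready pod behind the app=hello-node-connect selector, kube-proxy has nothing to forward the NodePort to, so connections are refused at the node. Two quick confirmations using the same kubectl context the test uses:

    # An empty ENDPOINTS column plus a CrashLoopBackOff pod status together
    # explain why the NodePort probe never succeeds.
    kubectl --context functional-289000 get endpoints hello-node-connect
    kubectl --context functional-289000 get pods -l app=hello-node-connect
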
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-289000 -n functional-289000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-289000 ssh findmnt                                                                                        | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh findmnt                                                                                        | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT | 27 Aug 24 14:46 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh -- ls                                                                                          | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT | 27 Aug 24 14:46 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh cat                                                                                            | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT | 27 Aug 24 14:46 PDT |
	|           | /mount-9p/test-1724795178081810000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh stat                                                                                           | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT | 27 Aug 24 14:46 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh stat                                                                                           | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT | 27 Aug 24 14:46 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh sudo                                                                                           | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT | 27 Aug 24 14:46 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh findmnt                                                                                        | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-289000                                                                                                 | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1607683484/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh findmnt                                                                                        | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT | 27 Aug 24 14:46 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh -- ls                                                                                          | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT | 27 Aug 24 14:46 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh sudo                                                                                           | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-289000                                                                                                 | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4210654987/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-289000                                                                                                 | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4210654987/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-289000                                                                                                 | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4210654987/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh findmnt                                                                                        | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh findmnt                                                                                        | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh findmnt                                                                                        | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT | 27 Aug 24 14:46 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh findmnt                                                                                        | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT | 27 Aug 24 14:46 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-289000 ssh findmnt                                                                                        | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT | 27 Aug 24 14:46 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-289000                                                                                                 | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-289000                                                                                                 | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-289000                                                                                                 | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-289000 --dry-run                                                                                       | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-289000 | jenkins | v1.33.1 | 27 Aug 24 14:46 PDT |                     |
	|           | -p functional-289000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
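	
	Note: mount rows above with no end time never reached a mounted state before the test gave up. A minimal manual re-run of the same 9p check, assuming the functional-289000 profile is up and using a hypothetical host directory /tmp/mount-src:
	
	  # start the 9p mount in the background, verify it from inside the guest, then tear it down
	  minikube -p functional-289000 mount /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
	  minikube -p functional-289000 ssh -- "findmnt -T /mount-9p | grep 9p"
	  minikube -p functional-289000 ssh -- sudo umount -f /mount-9p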
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 14:46:28
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 14:46:28.402319    2094 out.go:345] Setting OutFile to fd 1 ...
	I0827 14:46:28.402471    2094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 14:46:28.402474    2094 out.go:358] Setting ErrFile to fd 2...
	I0827 14:46:28.402476    2094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 14:46:28.402618    2094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 14:46:28.403593    2094 out.go:352] Setting JSON to false
	I0827 14:46:28.420194    2094 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":953,"bootTime":1724794235,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 14:46:28.420253    2094 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 14:46:28.425264    2094 out.go:177] * [functional-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 14:46:28.432268    2094 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 14:46:28.432333    2094 notify.go:220] Checking for updates...
	I0827 14:46:28.439226    2094 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 14:46:28.443179    2094 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 14:46:28.446250    2094 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 14:46:28.449246    2094 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 14:46:28.452282    2094 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 14:46:28.455570    2094 config.go:182] Loaded profile config "functional-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 14:46:28.455817    2094 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 14:46:28.460197    2094 out.go:177] * Using the qemu2 driver based on existing profile
	I0827 14:46:28.467269    2094 start.go:297] selected driver: qemu2
	I0827 14:46:28.467279    2094 start.go:901] validating driver "qemu2" against &{Name:functional-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 14:46:28.467378    2094 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 14:46:28.469660    2094 cni.go:84] Creating CNI manager for ""
	I0827 14:46:28.469679    2094 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 14:46:28.469720    2094 start.go:340] cluster config:
	{Name:functional-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 14:46:28.480249    2094 out.go:177] * dry-run validation complete!
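	
	The dry-run validation above was driven by the start invocation recorded in the audit table; for reference, the equivalent command line (binary path per MINIKUBE_BIN above):
	
	  out/minikube-darwin-arm64 start -p functional-289000 --dry-run --alsologtostderr -v=1 --driver=qemu2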
	
	
	==> Docker <==
	Aug 27 21:46:22 functional-289000 dockerd[5731]: time="2024-08-27T21:46:22.332901576Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 27 21:46:24 functional-289000 dockerd[5731]: time="2024-08-27T21:46:24.080122152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 27 21:46:24 functional-289000 dockerd[5731]: time="2024-08-27T21:46:24.080154985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 27 21:46:24 functional-289000 dockerd[5731]: time="2024-08-27T21:46:24.080162943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 27 21:46:24 functional-289000 dockerd[5731]: time="2024-08-27T21:46:24.080198401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 27 21:46:24 functional-289000 dockerd[5724]: time="2024-08-27T21:46:24.105535558Z" level=info msg="ignoring event" container=a8088bea2de77e44781fa7d43c2f5283c884fcf8bd8eb76eb81e8e917507f448 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 21:46:24 functional-289000 dockerd[5731]: time="2024-08-27T21:46:24.105649558Z" level=info msg="shim disconnected" id=a8088bea2de77e44781fa7d43c2f5283c884fcf8bd8eb76eb81e8e917507f448 namespace=moby
	Aug 27 21:46:24 functional-289000 dockerd[5731]: time="2024-08-27T21:46:24.105675557Z" level=warning msg="cleaning up after shim disconnected" id=a8088bea2de77e44781fa7d43c2f5283c884fcf8bd8eb76eb81e8e917507f448 namespace=moby
	Aug 27 21:46:24 functional-289000 dockerd[5731]: time="2024-08-27T21:46:24.105679682Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 27 21:46:24 functional-289000 dockerd[5731]: time="2024-08-27T21:46:24.109622216Z" level=warning msg="cleanup warnings time=\"2024-08-27T21:46:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 27 21:46:24 functional-289000 dockerd[5724]: time="2024-08-27T21:46:24.118743947Z" level=info msg="ignoring event" container=a924d4879418109390554292558aa1de3ef1ebff5a2a00f4f10732465de3083f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 21:46:24 functional-289000 dockerd[5731]: time="2024-08-27T21:46:24.119031030Z" level=info msg="shim disconnected" id=a924d4879418109390554292558aa1de3ef1ebff5a2a00f4f10732465de3083f namespace=moby
	Aug 27 21:46:24 functional-289000 dockerd[5731]: time="2024-08-27T21:46:24.119094322Z" level=warning msg="cleaning up after shim disconnected" id=a924d4879418109390554292558aa1de3ef1ebff5a2a00f4f10732465de3083f namespace=moby
	Aug 27 21:46:24 functional-289000 dockerd[5731]: time="2024-08-27T21:46:24.119111822Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 27 21:46:29 functional-289000 dockerd[5731]: time="2024-08-27T21:46:29.320370912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 27 21:46:29 functional-289000 dockerd[5731]: time="2024-08-27T21:46:29.320589661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 27 21:46:29 functional-289000 dockerd[5731]: time="2024-08-27T21:46:29.320638869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 27 21:46:29 functional-289000 dockerd[5731]: time="2024-08-27T21:46:29.320740161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 27 21:46:29 functional-289000 dockerd[5731]: time="2024-08-27T21:46:29.322397030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 27 21:46:29 functional-289000 dockerd[5731]: time="2024-08-27T21:46:29.322423072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 27 21:46:29 functional-289000 dockerd[5731]: time="2024-08-27T21:46:29.322430280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 27 21:46:29 functional-289000 dockerd[5731]: time="2024-08-27T21:46:29.322703738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 27 21:46:29 functional-289000 cri-dockerd[5987]: time="2024-08-27T21:46:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2690a7f252a4a9a8a866ec747029c903ab47b620c91eea5154fae30fd1604d26/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 27 21:46:29 functional-289000 cri-dockerd[5987]: time="2024-08-27T21:46:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/acc29ebda3307f11e85aa32a5409e29f5bcd690990906edb5045fba3cb17c1dc/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 27 21:46:29 functional-289000 dockerd[5724]: time="2024-08-27T21:46:29.627033379Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a8088bea2de77       72565bf5bbedf                                                                                         7 seconds ago        Exited              echoserver-arm            2                   28286cc81dd5e       hello-node-64b4f8f9ff-bt96n
	925dd970fd5a5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 seconds ago        Exited              mount-munger              0                   a924d48794181       busybox-mount
	740ee964c3a6e       72565bf5bbedf                                                                                         14 seconds ago       Exited              echoserver-arm            2                   bdf9cbbccff11       hello-node-connect-65d86f57f4-qv4zc
	4e97803ae7e66       nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add                         29 seconds ago       Running             myfrontend                0                   94543ab5742b3       sp-pod
	31b73f69341e8       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                         43 seconds ago       Running             nginx                     0                   f2415f5ca8e7f       nginx-svc
	9c89fe16aa63d       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   2a6599f3fee8c       coredns-6f6b679f8f-r8vb2
	f0b154ee21cb7       71d55d66fd4ee                                                                                         About a minute ago   Running             kube-proxy                2                   855db1b53cf44       kube-proxy-b4b5q
	f713378694e90       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   ebb2dd9d9b9b6       storage-provisioner
	8bdb9b91eea45       fbbbd428abb4d                                                                                         About a minute ago   Running             kube-scheduler            2                   cc2dac2ef53d0       kube-scheduler-functional-289000
	8d1a4f04d540f       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   3c120477e9916       etcd-functional-289000
	24226f3b83e58       fcb0683e6bdbd                                                                                         About a minute ago   Running             kube-controller-manager   2                   83f3339f45d0b       kube-controller-manager-functional-289000
	8b69371817216       cd0f0ae0ec9e0                                                                                         About a minute ago   Running             kube-apiserver            0                   bd809c9a480c4       kube-apiserver-functional-289000
	5a7648a1cb3a6       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   1ec75a876396b       coredns-6f6b679f8f-r8vb2
	954cf65e51560       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   bb77184334c33       storage-provisioner
	9cb358398d2ba       71d55d66fd4ee                                                                                         About a minute ago   Exited              kube-proxy                1                   e36888a5be01c       kube-proxy-b4b5q
	49e603e154251       27e3830e14027                                                                                         About a minute ago   Exited              etcd                      1                   5e02e41708da0       etcd-functional-289000
	6dbac468f9425       fcb0683e6bdbd                                                                                         About a minute ago   Exited              kube-controller-manager   1                   35bc9bc35cebe       kube-controller-manager-functional-289000
	2469d6b134235       fbbbd428abb4d                                                                                         About a minute ago   Exited              kube-scheduler            1                   8303f98abde26       kube-scheduler-functional-289000
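	
	This table follows the crictl layout; a sketch of regenerating it from inside the guest, assuming crictl is on the guest PATH as on minikube's ISO:
	
	  minikube -p functional-289000 ssh -- sudo crictl ps -a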
	
	
	==> coredns [5a7648a1cb3a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48784 - 57033 "HINFO IN 7327954645585112750.7639267996402648852. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009708116s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9c89fe16aa63] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51441 - 59027 "HINFO IN 3839803346198230065.5410681776241668427. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011339371s
	[INFO] 10.244.0.1:32202 - 28694 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000100249s
	[INFO] 10.244.0.1:24300 - 31483 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000085167s
	[INFO] 10.244.0.1:12779 - 24337 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000036s
	[INFO] 10.244.0.1:36262 - 29101 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001192456s
	[INFO] 10.244.0.1:31757 - 29374 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000121041s
	[INFO] 10.244.0.1:65448 - 260 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000371999s
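	
	The A/AAAA answers above show cluster DNS resolving nginx-svc for the service test. A quick in-cluster re-check with a throwaway pod (dns-test is a hypothetical name):
	
	  kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-svc.default.svc.cluster.local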
	
	
	==> describe nodes <==
	Name:               functional-289000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-289000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=functional-289000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_27T14_44_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 21:44:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-289000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 21:46:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 21:46:26 +0000   Tue, 27 Aug 2024 21:44:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 21:46:26 +0000   Tue, 27 Aug 2024 21:44:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 21:46:26 +0000   Tue, 27 Aug 2024 21:44:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 21:46:26 +0000   Tue, 27 Aug 2024 21:44:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-289000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 09a98225260f488e92a46f15735cc95c
	  System UUID:                09a98225260f488e92a46f15735cc95c
	  Boot ID:                    4828d573-2692-450a-bb53-eddc86274c75
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-bt96n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  default                     hello-node-connect-65d86f57f4-qv4zc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 coredns-6f6b679f8f-r8vb2                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m22s
	  kube-system                 etcd-functional-289000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m28s
	  kube-system                 kube-apiserver-functional-289000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-functional-289000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-b4b5q                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-functional-289000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-j6r4x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-pxtmg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m21s                kube-proxy       
	  Normal  Starting                 64s                  kube-proxy       
	  Normal  Starting                 113s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m28s                kubelet          Node functional-289000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m28s                kubelet          Node functional-289000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m28s                kubelet          Node functional-289000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m28s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m24s                kubelet          Node functional-289000 status is now: NodeReady
	  Normal  RegisteredNode           2m23s                node-controller  Node functional-289000 event: Registered Node functional-289000 in Controller
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node functional-289000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node functional-289000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     118s (x7 over 118s)  kubelet          Node functional-289000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           112s                 node-controller  Node functional-289000 event: Registered Node functional-289000 in Controller
	  Normal  Starting                 70s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  69s (x8 over 69s)    kubelet          Node functional-289000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    69s (x8 over 69s)    kubelet          Node functional-289000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     69s (x7 over 69s)    kubelet          Node functional-289000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  69s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s                  node-controller  Node functional-289000 event: Registered Node functional-289000 in Controller
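	
	This node dump can be regenerated against the same cluster with kubectl, using the kubeconfig path from the Last Start log above:
	
	  kubectl --kubeconfig /Users/jenkins/minikube-integration/19522-983/kubeconfig describe node functional-289000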
	
	
	==> dmesg <==
	[  +8.585575] systemd-fstab-generator[4795]: Ignoring "noauto" option for root device
	[Aug27 21:45] systemd-fstab-generator[5243]: Ignoring "noauto" option for root device
	[  +0.053471] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.100366] systemd-fstab-generator[5278]: Ignoring "noauto" option for root device
	[  +0.092098] systemd-fstab-generator[5290]: Ignoring "noauto" option for root device
	[  +0.113393] systemd-fstab-generator[5304]: Ignoring "noauto" option for root device
	[  +5.111834] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.292229] systemd-fstab-generator[5936]: Ignoring "noauto" option for root device
	[  +0.070814] systemd-fstab-generator[5948]: Ignoring "noauto" option for root device
	[  +0.085925] systemd-fstab-generator[5960]: Ignoring "noauto" option for root device
	[  +0.079453] systemd-fstab-generator[5975]: Ignoring "noauto" option for root device
	[  +0.210824] systemd-fstab-generator[6148]: Ignoring "noauto" option for root device
	[  +0.839310] systemd-fstab-generator[6271]: Ignoring "noauto" option for root device
	[  +4.957771] kauditd_printk_skb: 199 callbacks suppressed
	[  +8.105522] systemd-fstab-generator[7265]: Ignoring "noauto" option for root device
	[  +0.054463] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.284760] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.189214] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.010124] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.107072] kauditd_printk_skb: 2 callbacks suppressed
	[Aug27 21:46] kauditd_printk_skb: 14 callbacks suppressed
	[ +10.224624] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.216283] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.260782] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.685364] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [49e603e15425] <==
	{"level":"info","ts":"2024-08-27T21:44:35.822614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-27T21:44:35.822667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-27T21:44:35.822701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-27T21:44:35.822717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-27T21:44:35.822743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-27T21:44:35.822779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-27T21:44:35.827148Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-289000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-27T21:44:35.827168Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T21:44:35.827591Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T21:44:35.827847Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-27T21:44:35.827982Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-27T21:44:35.829352Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T21:44:35.829352Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T21:44:35.831617Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-27T21:44:35.831750Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-27T21:45:08.289492Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-27T21:45:08.289524Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-289000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-27T21:45:08.289562Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T21:45:08.289610Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T21:45:08.295243Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T21:45:08.295260Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-27T21:45:08.295280Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-27T21:45:08.300293Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-27T21:45:08.300348Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-27T21:45:08.300353Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-289000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [8d1a4f04d540] <==
	{"level":"info","ts":"2024-08-27T21:45:22.773117Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-27T21:45:22.773147Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T21:45:22.773158Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T21:45:22.775431Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T21:45:22.777276Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-27T21:45:22.777407Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-27T21:45:22.777411Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-27T21:45:22.778799Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-27T21:45:22.778808Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-27T21:45:24.463518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-27T21:45:24.463662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-27T21:45:24.463722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-27T21:45:24.463758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-27T21:45:24.463774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-27T21:45:24.463805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-27T21:45:24.463830Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-27T21:45:24.470584Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-289000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-27T21:45:24.470998Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T21:45:24.471100Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-27T21:45:24.471141Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-27T21:45:24.471169Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T21:45:24.472873Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T21:45:24.472887Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T21:45:24.474485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-27T21:45:24.474761Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:46:31 up 2 min,  0 users,  load average: 0.70, 0.45, 0.18
	Linux functional-289000 5.10.207 #1 SMP PREEMPT Mon Aug 26 18:57:20 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8b6937181721] <==
	I0827 21:45:25.072750       1 autoregister_controller.go:144] Starting autoregister controller
	I0827 21:45:25.072768       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0827 21:45:25.072783       1 cache.go:39] Caches are synced for autoregister controller
	I0827 21:45:25.073947       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0827 21:45:25.112938       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0827 21:45:25.112985       1 policy_source.go:224] refreshing policies
	I0827 21:45:25.112940       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0827 21:45:25.119160       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0827 21:45:25.972195       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0827 21:45:26.075203       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0827 21:45:26.075753       1 controller.go:615] quota admission added evaluator for: endpoints
	I0827 21:45:26.077307       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0827 21:45:26.628835       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0827 21:45:26.632551       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0827 21:45:26.642955       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0827 21:45:26.649692       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0827 21:45:26.651605       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0827 21:45:39.932912       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.248.235"}
	I0827 21:45:45.142653       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.157.183"}
	I0827 21:45:55.558205       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0827 21:45:55.600736       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.249.163"}
	I0827 21:46:10.826446       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.255.157"}
	I0827 21:46:28.905587       1 controller.go:615] quota admission added evaluator for: namespaces
	I0827 21:46:28.988313       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.153.142"}
	I0827 21:46:29.001475       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.108.168"}
	
	
	==> kube-controller-manager [24226f3b83e5] <==
	I0827 21:46:10.802373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="15.417µs"
	I0827 21:46:11.787413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="143.125µs"
	I0827 21:46:12.821037       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="41.959µs"
	I0827 21:46:17.906039       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="35.75µs"
	I0827 21:46:24.030378       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="46.625µs"
	I0827 21:46:25.029401       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="30.25µs"
	I0827 21:46:26.450444       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-289000"
	I0827 21:46:28.941022       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.728096ms"
	E0827 21:46:28.941045       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0827 21:46:28.942553       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="18.922353ms"
	E0827 21:46:28.942570       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0827 21:46:28.947146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.744109ms"
	E0827 21:46:28.947167       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0827 21:46:28.947198       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.397072ms"
	E0827 21:46:28.947205       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0827 21:46:28.950886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="2.433866ms"
	E0827 21:46:28.950902       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0827 21:46:28.956000       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.678519ms"
	I0827 21:46:28.973741       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="13.70537ms"
	I0827 21:46:28.974191       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="18.17473ms"
	I0827 21:46:28.974236       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="33.375µs"
	I0827 21:46:28.974280       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.708µs"
	I0827 21:46:28.985958       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="12.186792ms"
	I0827 21:46:28.986096       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="20.208µs"
	I0827 21:46:29.014891       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="27.833µs"
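	
	The "serviceaccount ... not found" errors above are the usual ordering race while the dashboard manifests apply: the ReplicaSets are created before their ServiceAccount, and the retries succeed once it exists (the final synced lines). A quick confirmation, assuming the namespace from this run:
	
	  kubectl -n kubernetes-dashboard get serviceaccount kubernetes-dashboard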
	
	
	==> kube-controller-manager [6dbac468f942] <==
	I0827 21:44:39.712344       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0827 21:44:39.713702       1 shared_informer.go:320] Caches are synced for persistent volume
	I0827 21:44:39.713831       1 shared_informer.go:320] Caches are synced for endpoint
	I0827 21:44:39.713910       1 shared_informer.go:320] Caches are synced for crt configmap
	I0827 21:44:39.713993       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0827 21:44:39.714046       1 shared_informer.go:320] Caches are synced for job
	I0827 21:44:39.715685       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0827 21:44:39.715786       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0827 21:44:39.715738       1 shared_informer.go:320] Caches are synced for GC
	I0827 21:44:39.715741       1 shared_informer.go:320] Caches are synced for PVC protection
	I0827 21:44:39.715745       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0827 21:44:39.786346       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0827 21:44:39.865397       1 shared_informer.go:320] Caches are synced for attach detach
	I0827 21:44:39.868389       1 shared_informer.go:320] Caches are synced for deployment
	I0827 21:44:39.914421       1 shared_informer.go:320] Caches are synced for disruption
	I0827 21:44:39.917088       1 shared_informer.go:320] Caches are synced for resource quota
	I0827 21:44:39.917625       1 shared_informer.go:320] Caches are synced for resource quota
	I0827 21:44:39.967799       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="251.86661ms"
	I0827 21:44:39.967907       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="62.666µs"
	I0827 21:44:40.334745       1 shared_informer.go:320] Caches are synced for garbage collector
	I0827 21:44:40.390621       1 shared_informer.go:320] Caches are synced for garbage collector
	I0827 21:44:40.390856       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0827 21:44:44.668733       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="19.389879ms"
	I0827 21:44:44.668974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="202.705µs"
	I0827 21:45:06.876361       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-289000"
	
	
	==> kube-proxy [9cb358398d2b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0827 21:44:37.313107       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0827 21:44:37.318043       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0827 21:44:37.318787       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 21:44:37.331486       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 21:44:37.331501       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 21:44:37.331514       1 server_linux.go:169] "Using iptables Proxier"
	I0827 21:44:37.332165       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 21:44:37.332279       1 server.go:483] "Version info" version="v1.31.0"
	I0827 21:44:37.332284       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 21:44:37.332673       1 config.go:197] "Starting service config controller"
	I0827 21:44:37.332689       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 21:44:37.332698       1 config.go:104] "Starting endpoint slice config controller"
	I0827 21:44:37.332699       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 21:44:37.332890       1 config.go:326] "Starting node config controller"
	I0827 21:44:37.332893       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 21:44:37.433148       1 shared_informer.go:320] Caches are synced for node config
	I0827 21:44:37.433170       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0827 21:44:37.433148       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f0b154ee21cb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0827 21:45:27.062460       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0827 21:45:27.069047       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0827 21:45:27.069127       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 21:45:27.093219       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 21:45:27.093239       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 21:45:27.093253       1 server_linux.go:169] "Using iptables Proxier"
	I0827 21:45:27.093878       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 21:45:27.093979       1 server.go:483] "Version info" version="v1.31.0"
	I0827 21:45:27.093988       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 21:45:27.094415       1 config.go:197] "Starting service config controller"
	I0827 21:45:27.094429       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 21:45:27.094451       1 config.go:104] "Starting endpoint slice config controller"
	I0827 21:45:27.094457       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 21:45:27.094658       1 config.go:326] "Starting node config controller"
	I0827 21:45:27.094664       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 21:45:27.194893       1 shared_informer.go:320] Caches are synced for service config
	I0827 21:45:27.194893       1 shared_informer.go:320] Caches are synced for node config
	I0827 21:45:27.194905       1 shared_informer.go:320] Caches are synced for endpoint slice config
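
Editor's note: both kube-proxy instances log the same nftables cleanup failure ("Operation not supported") and then proceed with the iptables proxier, which suggests the qemu2 guest kernel lacks nf_tables support; the errors are cleanup noise rather than the test failure itself. A quick way to confirm from the host would be something like the following (illustrative commands, not from the test run; assumes the guest image ships the nft and iptables binaries):

	$ minikube -p functional-289000 ssh -- sudo nft list tables        # expected to fail if nf_tables is absent
	$ minikube -p functional-289000 ssh -- sudo iptables -t nat -L -n  # the fallback path kube-proxy actually uses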
	
	
	==> kube-scheduler [2469d6b13423] <==
	I0827 21:44:34.881590       1 serving.go:386] Generated self-signed cert in-memory
	W0827 21:44:36.355256       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0827 21:44:36.355375       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0827 21:44:36.355397       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0827 21:44:36.355416       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0827 21:44:36.369869       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0827 21:44:36.369884       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 21:44:36.370761       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0827 21:44:36.370806       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0827 21:44:36.370817       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 21:44:36.370824       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0827 21:44:36.471785       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 21:45:08.300910       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0827 21:45:08.300939       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0827 21:45:08.301099       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8bdb9b91eea4] <==
	I0827 21:45:23.508423       1 serving.go:386] Generated self-signed cert in-memory
	W0827 21:45:24.988408       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0827 21:45:24.988425       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0827 21:45:24.988430       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0827 21:45:24.988434       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0827 21:45:25.032753       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0827 21:45:25.032849       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 21:45:25.033793       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0827 21:45:25.033868       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0827 21:45:25.033913       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 21:45:25.034384       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W0827 21:45:25.041574       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0827 21:45:25.042030       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0827 21:45:26.134679       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
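
Editor's note: the requestheader_controller warning above embeds its own remediation template. This scheduler authenticates as the user system:kube-scheduler rather than a service account, so the template would expand to roughly the following (a sketch only, with an arbitrary binding name; in kubeadm-style clusters this warning is typically transient during bring-up and needs no action):

	$ kubectl -n kube-system create rolebinding scheduler-extension-apiserver-authn-reader \
	    --role=extension-apiserver-authentication-reader \
	    --user=system:kube-scheduler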
	
	
	==> kubelet <==
	Aug 27 21:46:22 functional-289000 kubelet[6278]: E0827 21:46:22.020773    6278 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 27 21:46:22 functional-289000 kubelet[6278]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 27 21:46:22 functional-289000 kubelet[6278]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 27 21:46:22 functional-289000 kubelet[6278]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 27 21:46:22 functional-289000 kubelet[6278]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 27 21:46:22 functional-289000 kubelet[6278]: I0827 21:46:22.092571    6278 scope.go:117] "RemoveContainer" containerID="fa627da83d69c1cdc7eed084ff64d7b462b017fee6a2659be4818ed83fa4eed0"
	Aug 27 21:46:24 functional-289000 kubelet[6278]: I0827 21:46:24.011374    6278 scope.go:117] "RemoveContainer" containerID="8b7f64557e3f645364c123ddcbf23de4c9b8e73e41653a5b8d57a9b89a9bf43c"
	Aug 27 21:46:24 functional-289000 kubelet[6278]: I0827 21:46:24.196146    6278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/97763ca7-64c9-4089-9b20-7460752f064f-test-volume\") pod \"97763ca7-64c9-4089-9b20-7460752f064f\" (UID: \"97763ca7-64c9-4089-9b20-7460752f064f\") "
	Aug 27 21:46:24 functional-289000 kubelet[6278]: I0827 21:46:24.196188    6278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9skz\" (UniqueName: \"kubernetes.io/projected/97763ca7-64c9-4089-9b20-7460752f064f-kube-api-access-s9skz\") pod \"97763ca7-64c9-4089-9b20-7460752f064f\" (UID: \"97763ca7-64c9-4089-9b20-7460752f064f\") "
	Aug 27 21:46:24 functional-289000 kubelet[6278]: I0827 21:46:24.196188    6278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97763ca7-64c9-4089-9b20-7460752f064f-test-volume" (OuterVolumeSpecName: "test-volume") pod "97763ca7-64c9-4089-9b20-7460752f064f" (UID: "97763ca7-64c9-4089-9b20-7460752f064f"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 27 21:46:24 functional-289000 kubelet[6278]: I0827 21:46:24.196213    6278 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/97763ca7-64c9-4089-9b20-7460752f064f-test-volume\") on node \"functional-289000\" DevicePath \"\""
	Aug 27 21:46:24 functional-289000 kubelet[6278]: I0827 21:46:24.196910    6278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97763ca7-64c9-4089-9b20-7460752f064f-kube-api-access-s9skz" (OuterVolumeSpecName: "kube-api-access-s9skz") pod "97763ca7-64c9-4089-9b20-7460752f064f" (UID: "97763ca7-64c9-4089-9b20-7460752f064f"). InnerVolumeSpecName "kube-api-access-s9skz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 27 21:46:24 functional-289000 kubelet[6278]: I0827 21:46:24.297051    6278 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-s9skz\" (UniqueName: \"kubernetes.io/projected/97763ca7-64c9-4089-9b20-7460752f064f-kube-api-access-s9skz\") on node \"functional-289000\" DevicePath \"\""
	Aug 27 21:46:25 functional-289000 kubelet[6278]: I0827 21:46:25.020076    6278 scope.go:117] "RemoveContainer" containerID="8b7f64557e3f645364c123ddcbf23de4c9b8e73e41653a5b8d57a9b89a9bf43c"
	Aug 27 21:46:25 functional-289000 kubelet[6278]: I0827 21:46:25.020306    6278 scope.go:117] "RemoveContainer" containerID="a8088bea2de77e44781fa7d43c2f5283c884fcf8bd8eb76eb81e8e917507f448"
	Aug 27 21:46:25 functional-289000 kubelet[6278]: E0827 21:46:25.020405    6278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-bt96n_default(f0005e4a-7fb0-4908-ad12-5a2f58999d3c)\"" pod="default/hello-node-64b4f8f9ff-bt96n" podUID="f0005e4a-7fb0-4908-ad12-5a2f58999d3c"
	Aug 27 21:46:25 functional-289000 kubelet[6278]: I0827 21:46:25.031458    6278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a924d4879418109390554292558aa1de3ef1ebff5a2a00f4f10732465de3083f"
	Aug 27 21:46:28 functional-289000 kubelet[6278]: E0827 21:46:28.956331    6278 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="97763ca7-64c9-4089-9b20-7460752f064f" containerName="mount-munger"
	Aug 27 21:46:28 functional-289000 kubelet[6278]: I0827 21:46:28.956381    6278 memory_manager.go:354] "RemoveStaleState removing state" podUID="97763ca7-64c9-4089-9b20-7460752f064f" containerName="mount-munger"
	Aug 27 21:46:29 functional-289000 kubelet[6278]: I0827 21:46:29.010625    6278 scope.go:117] "RemoveContainer" containerID="740ee964c3a6e390a7ee22e09edc69d485e2a502581b7de4d34a4b45733cb3a5"
	Aug 27 21:46:29 functional-289000 kubelet[6278]: E0827 21:46:29.010708    6278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-qv4zc_default(0d27c5f3-0fa5-4b0f-b6f2-5de79ddea215)\"" pod="default/hello-node-connect-65d86f57f4-qv4zc" podUID="0d27c5f3-0fa5-4b0f-b6f2-5de79ddea215"
	Aug 27 21:46:29 functional-289000 kubelet[6278]: I0827 21:46:29.037055    6278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/49850d24-31c5-4081-bad7-8944c1a56175-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-pxtmg\" (UID: \"49850d24-31c5-4081-bad7-8944c1a56175\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-pxtmg"
	Aug 27 21:46:29 functional-289000 kubelet[6278]: I0827 21:46:29.037079    6278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8ecd6f4f-ae65-4628-9a1b-3a6f1974a676-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-j6r4x\" (UID: \"8ecd6f4f-ae65-4628-9a1b-3a6f1974a676\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-j6r4x"
	Aug 27 21:46:29 functional-289000 kubelet[6278]: I0827 21:46:29.037089    6278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2tjj\" (UniqueName: \"kubernetes.io/projected/8ecd6f4f-ae65-4628-9a1b-3a6f1974a676-kube-api-access-t2tjj\") pod \"dashboard-metrics-scraper-c5db448b4-j6r4x\" (UID: \"8ecd6f4f-ae65-4628-9a1b-3a6f1974a676\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-j6r4x"
	Aug 27 21:46:29 functional-289000 kubelet[6278]: I0827 21:46:29.037098    6278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9ts6\" (UniqueName: \"kubernetes.io/projected/49850d24-31c5-4081-bad7-8944c1a56175-kube-api-access-h9ts6\") pod \"kubernetes-dashboard-695b96c756-pxtmg\" (UID: \"49850d24-31c5-4081-bad7-8944c1a56175\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-pxtmg"
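
Editor's note: the recurring iptables canary error at the top of this kubelet log is the IPv6 twin of the kube-proxy nftables issue above: the guest kernel has no ip6tables nat table. It can be reproduced directly with an illustrative check such as:

	$ minikube -p functional-289000 ssh -- sudo ip6tables -t nat -L

The CrashLoopBackOff entries for echoserver-arm are the lines actually relevant to the ServiceCmdConnect failure below.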
	
	
	==> storage-provisioner [954cf65e5156] <==
	I0827 21:44:37.281420       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0827 21:44:37.287536       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0827 21:44:37.287703       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0827 21:44:54.708920       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0827 21:44:54.709431       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-289000_15ebe69d-e758-4918-be4d-c2bf8a8c3850!
	I0827 21:44:54.710487       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"43bcb0ad-d6eb-4588-aa85-c71996eac900", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-289000_15ebe69d-e758-4918-be4d-c2bf8a8c3850 became leader
	I0827 21:44:54.812819       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-289000_15ebe69d-e758-4918-be4d-c2bf8a8c3850!
	
	
	==> storage-provisioner [f713378694e9] <==
	I0827 21:45:27.006791       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0827 21:45:27.017985       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0827 21:45:27.018035       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0827 21:45:44.425229       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0827 21:45:44.426075       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"43bcb0ad-d6eb-4588-aa85-c71996eac900", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-289000_be62755c-362b-4a18-9208-33955ab322a6 became leader
	I0827 21:45:44.426238       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-289000_be62755c-362b-4a18-9208-33955ab322a6!
	I0827 21:45:44.527596       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-289000_be62755c-362b-4a18-9208-33955ab322a6!
	I0827 21:45:50.020637       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0827 21:45:50.020742       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    7eb08672-69f2-47eb-9cb5-81b5cc440a52 340 0 2024-08-27 21:44:09 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-27 21:44:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-9e3c535b-0923-4ad8-b870-f88bc2a68686 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  9e3c535b-0923-4ad8-b870-f88bc2a68686 662 0 2024-08-27 21:45:50 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-27 21:45:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-27 21:45:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0827 21:45:50.021443       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-9e3c535b-0923-4ad8-b870-f88bc2a68686" provisioned
	I0827 21:45:50.021462       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0827 21:45:50.021474       1 volume_store.go:212] Trying to save persistentvolume "pvc-9e3c535b-0923-4ad8-b870-f88bc2a68686"
	I0827 21:45:50.022041       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9e3c535b-0923-4ad8-b870-f88bc2a68686", APIVersion:"v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0827 21:45:50.025196       1 volume_store.go:219] persistentvolume "pvc-9e3c535b-0923-4ad8-b870-f88bc2a68686" saved
	I0827 21:45:50.025343       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9e3c535b-0923-4ad8-b870-f88bc2a68686", APIVersion:"v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-9e3c535b-0923-4ad8-b870-f88bc2a68686
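
Editor's note: for reference, the claim that drove this provisioning cycle can be re-assembled from the PersistentVolumeClaim dump above; something like the following would reproduce it (a sketch reconstructed from the logged fields, not taken from the test code):

	$ kubectl --context functional-289000 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes: ["ReadWriteOnce"]
	  storageClassName: standard
	  volumeMode: Filesystem
	  resources:
	    requests:
	      storage: 500Mi
	EOF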
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-289000 -n functional-289000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-289000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-c5db448b4-j6r4x kubernetes-dashboard-695b96c756-pxtmg
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-289000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-j6r4x kubernetes-dashboard-695b96c756-pxtmg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-289000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-j6r4x kubernetes-dashboard-695b96c756-pxtmg: exit status 1 (40.252917ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-289000/192.168.105.4
	Start Time:       Tue, 27 Aug 2024 14:46:19 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://925dd970fd5a5965826c9f842f8df313e07db34570fca4bcd797f39b962d17e3
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 27 Aug 2024 14:46:22 -0700
	      Finished:     Tue, 27 Aug 2024 14:46:22 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s9skz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-s9skz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  12s   default-scheduler  Successfully assigned default/busybox-mount to functional-289000
	  Normal  Pulling    11s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.902s (1.902s including waiting). Image size: 3547125 bytes.
	  Normal  Created    9s    kubelet            Created container mount-munger
	  Normal  Started    9s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-j6r4x" not found
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-pxtmg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-289000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-j6r4x kubernetes-dashboard-695b96c756-pxtmg: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (36.31s)
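Editor's note: the proximate cause is the echoserver-arm container crash-looping (see the kubelet CrashLoopBackOff entries above); the dashboard-* pods were already gone by describe time, per the NotFound errors in the stderr block. Triage would look roughly like this (illustrative; assumes the deployment keeps kubectl's default app label):

	$ kubectl --context functional-289000 get pods -l app=hello-node-connect
	$ kubectl --context functional-289000 logs deploy/hello-node-connect --previous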

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (312.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-615000 node stop m02 -v=7 --alsologtostderr: (12.193391125s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr
E0827 14:51:25.832252    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:52:06.793449    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:53:28.716027    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:54:24.623337    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr: exit status 7 (3m45.049195291s)

                                                
                                                
-- stdout --
	ha-615000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-615000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-615000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-615000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 14:51:24.764783    2518 out.go:345] Setting OutFile to fd 1 ...
	I0827 14:51:24.764932    2518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 14:51:24.764940    2518 out.go:358] Setting ErrFile to fd 2...
	I0827 14:51:24.764943    2518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 14:51:24.765077    2518 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 14:51:24.765219    2518 out.go:352] Setting JSON to false
	I0827 14:51:24.765229    2518 mustload.go:65] Loading cluster: ha-615000
	I0827 14:51:24.765274    2518 notify.go:220] Checking for updates...
	I0827 14:51:24.765475    2518 config.go:182] Loaded profile config "ha-615000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 14:51:24.765481    2518 status.go:255] checking status of ha-615000 ...
	I0827 14:51:24.766147    2518 status.go:330] ha-615000 host status = "Running" (err=<nil>)
	I0827 14:51:24.766155    2518 host.go:66] Checking if "ha-615000" exists ...
	I0827 14:51:24.766263    2518 host.go:66] Checking if "ha-615000" exists ...
	I0827 14:51:24.766377    2518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 14:51:24.766383    2518 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000/id_rsa Username:docker}
	W0827 14:52:39.768674    2518 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0827 14:52:39.768744    2518 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0827 14:52:39.768754    2518 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0827 14:52:39.768764    2518 status.go:257] ha-615000 status: &{Name:ha-615000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0827 14:52:39.768773    2518 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0827 14:52:39.768776    2518 status.go:255] checking status of ha-615000-m02 ...
	I0827 14:52:39.768966    2518 status.go:330] ha-615000-m02 host status = "Stopped" (err=<nil>)
	I0827 14:52:39.768971    2518 status.go:343] host is not running, skipping remaining checks
	I0827 14:52:39.768973    2518 status.go:257] ha-615000-m02 status: &{Name:ha-615000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 14:52:39.768977    2518 status.go:255] checking status of ha-615000-m03 ...
	I0827 14:52:39.769586    2518 status.go:330] ha-615000-m03 host status = "Running" (err=<nil>)
	I0827 14:52:39.769591    2518 host.go:66] Checking if "ha-615000-m03" exists ...
	I0827 14:52:39.769680    2518 host.go:66] Checking if "ha-615000-m03" exists ...
	I0827 14:52:39.769797    2518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 14:52:39.769803    2518 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m03/id_rsa Username:docker}
	W0827 14:53:54.771041    2518 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0827 14:53:54.771094    2518 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0827 14:53:54.771103    2518 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0827 14:53:54.771106    2518 status.go:257] ha-615000-m03 status: &{Name:ha-615000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0827 14:53:54.771114    2518 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0827 14:53:54.771118    2518 status.go:255] checking status of ha-615000-m04 ...
	I0827 14:53:54.771817    2518 status.go:330] ha-615000-m04 host status = "Running" (err=<nil>)
	I0827 14:53:54.771826    2518 host.go:66] Checking if "ha-615000-m04" exists ...
	I0827 14:53:54.771916    2518 host.go:66] Checking if "ha-615000-m04" exists ...
	I0827 14:53:54.772039    2518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 14:53:54.772051    2518 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m04/id_rsa Username:docker}
	W0827 14:55:09.773104    2518 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0827 14:55:09.773149    2518 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0827 14:55:09.773158    2518 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0827 14:55:09.773161    2518 status.go:257] ha-615000-m04 status: &{Name:ha-615000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0827 14:55:09.773171    2518 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr": ha-615000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-615000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-615000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-615000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr": ha-615000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-615000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-615000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-615000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr": ha-615000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-615000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-615000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-615000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-615000 -n ha-615000
E0827 14:55:44.833883    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:56:12.557698    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-615000 -n ha-615000: exit status 3 (1m15.039767708s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0827 14:56:24.810579    2548 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0827 14:56:24.810587    2548 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-615000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (312.28s)
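Editor's note: each unreachable node costs the status check a full 75 s SSH dial timeout (14:52:39, 14:53:54, 14:55:09 above), which is why a single status invocation ran for 3m45s. Host-side reachability of a node can be checked in seconds instead, e.g. (illustrative macOS commands, not from the run):

	$ nc -z 192.168.105.5 22 && echo ssh-open
	$ ping -c 1 192.168.105.5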

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.057967917s)
ha_test.go:413: expected profile "ha-615000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-615000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-615000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-615000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-615000 -n ha-615000
E0827 14:59:24.619917    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-615000 -n ha-615000: exit status 3 (1m15.039659875s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0827 15:00:09.905969    2591 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0827 15:00:09.905991    2591 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-615000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.10s)
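Editor's note: the assertion compares the Status field in the profile-list JSON against "Degraded". Extracting just that field makes re-checks cheap; a sketch using only the JSON structure shown above:

	$ out/minikube-darwin-arm64 profile list --output json \
	    | python3 -c 'import json,sys; [print(p["Name"], p["Status"]) for p in json.load(sys.stdin)["valid"]]'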

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (305.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-615000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.1207485s)

                                                
                                                
-- stdout --
	* Starting "ha-615000-m02" control-plane node in "ha-615000" cluster
	* Restarting existing qemu2 VM for "ha-615000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-615000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:00:09.958956    2886 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:00:09.959246    2886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:00:09.959250    2886 out.go:358] Setting ErrFile to fd 2...
	I0827 15:00:09.959253    2886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:00:09.959405    2886 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:00:09.959687    2886 mustload.go:65] Loading cluster: ha-615000
	I0827 15:00:09.959955    2886 config.go:182] Loaded profile config "ha-615000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0827 15:00:09.960223    2886 host.go:58] "ha-615000-m02" host status: Stopped
	I0827 15:00:09.964023    2886 out.go:177] * Starting "ha-615000-m02" control-plane node in "ha-615000" cluster
	I0827 15:00:09.968772    2886 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:00:09.968787    2886 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:00:09.968797    2886 cache.go:56] Caching tarball of preloaded images
	I0827 15:00:09.968879    2886 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:00:09.968885    2886 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:00:09.968950    2886 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/ha-615000/config.json ...
	I0827 15:00:09.969391    2886 start.go:360] acquireMachinesLock for ha-615000-m02: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:00:09.969436    2886 start.go:364] duration metric: took 31µs to acquireMachinesLock for "ha-615000-m02"
	I0827 15:00:09.969447    2886 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:00:09.969452    2886 fix.go:54] fixHost starting: m02
	I0827 15:00:09.969600    2886 fix.go:112] recreateIfNeeded on ha-615000-m02: state=Stopped err=<nil>
	W0827 15:00:09.969606    2886 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:00:09.973828    2886 out.go:177] * Restarting existing qemu2 VM for "ha-615000-m02" ...
	I0827 15:00:09.977771    2886 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:00:09.977827    2886 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:7d:de:c9:01:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m02/disk.qcow2
	I0827 15:00:09.980395    2886 main.go:141] libmachine: STDOUT: 
	I0827 15:00:09.980430    2886 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:00:09.980456    2886 fix.go:56] duration metric: took 11.004667ms for fixHost
	I0827 15:00:09.980461    2886 start.go:83] releasing machines lock for "ha-615000-m02", held for 11.019416ms
	W0827 15:00:09.980471    2886 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:00:09.980495    2886 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:00:09.980500    2886 start.go:729] Will try again in 5 seconds ...
	I0827 15:00:14.982589    2886 start.go:360] acquireMachinesLock for ha-615000-m02: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:00:14.983043    2886 start.go:364] duration metric: took 381.833µs to acquireMachinesLock for "ha-615000-m02"
	I0827 15:00:14.983172    2886 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:00:14.983187    2886 fix.go:54] fixHost starting: m02
	I0827 15:00:14.983784    2886 fix.go:112] recreateIfNeeded on ha-615000-m02: state=Stopped err=<nil>
	W0827 15:00:14.983804    2886 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:00:14.988076    2886 out.go:177] * Restarting existing qemu2 VM for "ha-615000-m02" ...
	I0827 15:00:14.992121    2886 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:00:14.992283    2886 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:7d:de:c9:01:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m02/disk.qcow2
	I0827 15:00:14.999934    2886 main.go:141] libmachine: STDOUT: 
	I0827 15:00:15.000005    2886 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:00:15.000078    2886 fix.go:56] duration metric: took 16.891625ms for fixHost
	I0827 15:00:15.000095    2886 start.go:83] releasing machines lock for "ha-615000-m02", held for 17.0325ms
	W0827 15:00:15.000277    2886 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-615000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-615000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:00:15.007095    2886 out.go:201] 
	W0827 15:00:15.011089    2886 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:00:15.011107    2886 out.go:270] * 
	* 
	W0827 15:00:15.017954    2886 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:00:15.022485    2886 out.go:201] 

** /stderr **
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-615000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr
E0827 15:00:44.831380    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 15:00:47.714130    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr: exit status 7 (3m45.079595042s)

-- stdout --
	ha-615000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-615000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-615000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-615000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0827 15:00:15.090765    2890 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:00:15.090971    2890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:00:15.090976    2890 out.go:358] Setting ErrFile to fd 2...
	I0827 15:00:15.090979    2890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:00:15.091150    2890 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:00:15.091307    2890 out.go:352] Setting JSON to false
	I0827 15:00:15.091320    2890 mustload.go:65] Loading cluster: ha-615000
	I0827 15:00:15.091363    2890 notify.go:220] Checking for updates...
	I0827 15:00:15.091602    2890 config.go:182] Loaded profile config "ha-615000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:00:15.091612    2890 status.go:255] checking status of ha-615000 ...
	I0827 15:00:15.092454    2890 status.go:330] ha-615000 host status = "Running" (err=<nil>)
	I0827 15:00:15.092464    2890 host.go:66] Checking if "ha-615000" exists ...
	I0827 15:00:15.092590    2890 host.go:66] Checking if "ha-615000" exists ...
	I0827 15:00:15.092728    2890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 15:00:15.092738    2890 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000/id_rsa Username:docker}
	W0827 15:01:30.092607    2890 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0827 15:01:30.092894    2890 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0827 15:01:30.092936    2890 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0827 15:01:30.092955    2890 status.go:257] ha-615000 status: &{Name:ha-615000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0827 15:01:30.093018    2890 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0827 15:01:30.093037    2890 status.go:255] checking status of ha-615000-m02 ...
	I0827 15:01:30.094039    2890 status.go:330] ha-615000-m02 host status = "Stopped" (err=<nil>)
	I0827 15:01:30.094062    2890 status.go:343] host is not running, skipping remaining checks
	I0827 15:01:30.094078    2890 status.go:257] ha-615000-m02 status: &{Name:ha-615000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 15:01:30.094097    2890 status.go:255] checking status of ha-615000-m03 ...
	I0827 15:01:30.097917    2890 status.go:330] ha-615000-m03 host status = "Running" (err=<nil>)
	I0827 15:01:30.097951    2890 host.go:66] Checking if "ha-615000-m03" exists ...
	I0827 15:01:30.098584    2890 host.go:66] Checking if "ha-615000-m03" exists ...
	I0827 15:01:30.099213    2890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 15:01:30.099249    2890 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m03/id_rsa Username:docker}
	W0827 15:02:45.101475    2890 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0827 15:02:45.101521    2890 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0827 15:02:45.101530    2890 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0827 15:02:45.101534    2890 status.go:257] ha-615000-m03 status: &{Name:ha-615000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0827 15:02:45.101544    2890 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0827 15:02:45.101547    2890 status.go:255] checking status of ha-615000-m04 ...
	I0827 15:02:45.102588    2890 status.go:330] ha-615000-m04 host status = "Running" (err=<nil>)
	I0827 15:02:45.102597    2890 host.go:66] Checking if "ha-615000-m04" exists ...
	I0827 15:02:45.102698    2890 host.go:66] Checking if "ha-615000-m04" exists ...
	I0827 15:02:45.102819    2890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 15:02:45.102827    2890 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m04/id_rsa Username:docker}
	W0827 15:04:00.102441    2890 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0827 15:04:00.102631    2890 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0827 15:04:00.102666    2890 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0827 15:04:00.102682    2890 status.go:257] ha-615000-m04 status: &{Name:ha-615000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0827 15:04:00.102714    2890 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-615000 -n ha-615000
E0827 15:04:24.616391    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-615000 -n ha-615000: exit status 3 (1m15.079575708s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0827 15:05:15.182705    2926 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0827 15:05:15.182750    2926 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-615000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (305.28s)
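Every failed start in this section dies at the same first step: the qemu2 driver launches QEMU through socket_vmnet_client, which must hand QEMU a network file descriptor (the -netdev socket,id=net0,fd=3 argument above), and the connection to /var/run/socket_vmnet is refused, so the VM never boots. A quick way to check the daemon's state on the host, independent of minikube, is to dial the socket directly. The following is a minimal Go sketch, assuming only the default SocketVMnetPath that appears in the logs above:

    // socketcheck.go: probe the unix socket the qemu2 driver connects
    // through. A "connection refused" here reproduces the driver-start
    // failure above and usually means the socket_vmnet daemon is not
    // running on the host.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const socketPath = "/var/run/socket_vmnet" // path taken from the logs above

        conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", socketPath, err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Printf("socket_vmnet is accepting connections at %s\n", socketPath)
    }

If the probe fails, the tests cannot recover on their own; the daemon has to be brought back up on the Jenkins host (however it is supervised there) before any qemu2 start can succeed.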

TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.51s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-615000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-615000 -v=7 --alsologtostderr
E0827 15:09:24.609890    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 15:10:44.822079    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-615000 -v=7 --alsologtostderr: (5m27.162535s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-615000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-615000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.215843417s)

-- stdout --
	* [ha-615000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-615000" primary control-plane node in "ha-615000" cluster
	* Restarting existing qemu2 VM for "ha-615000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-615000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:13:12.551096    2988 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:13:12.551253    2988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:13:12.551258    2988 out.go:358] Setting ErrFile to fd 2...
	I0827 15:13:12.551261    2988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:13:12.551421    2988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:13:12.552748    2988 out.go:352] Setting JSON to false
	I0827 15:13:12.572579    2988 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2557,"bootTime":1724794235,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:13:12.572649    2988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:13:12.577739    2988 out.go:177] * [ha-615000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:13:12.585694    2988 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:13:12.585743    2988 notify.go:220] Checking for updates...
	I0827 15:13:12.591572    2988 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:13:12.595633    2988 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:13:12.598650    2988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:13:12.601647    2988 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:13:12.604638    2988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:13:12.607999    2988 config.go:182] Loaded profile config "ha-615000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:13:12.608049    2988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:13:12.611630    2988 out.go:177] * Using the qemu2 driver based on existing profile
	I0827 15:13:12.618681    2988 start.go:297] selected driver: qemu2
	I0827 15:13:12.618687    2988 start.go:901] validating driver "qemu2" against &{Name:ha-615000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-615000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:13:12.618759    2988 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:13:12.621424    2988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:13:12.621471    2988 cni.go:84] Creating CNI manager for ""
	I0827 15:13:12.621476    2988 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0827 15:13:12.621536    2988 start.go:340] cluster config:
	{Name:ha-615000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-615000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:13:12.625500    2988 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:13:12.632692    2988 out.go:177] * Starting "ha-615000" primary control-plane node in "ha-615000" cluster
	I0827 15:13:12.636504    2988 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:13:12.636522    2988 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:13:12.636534    2988 cache.go:56] Caching tarball of preloaded images
	I0827 15:13:12.636600    2988 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:13:12.636607    2988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:13:12.636703    2988 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/ha-615000/config.json ...
	I0827 15:13:12.637198    2988 start.go:360] acquireMachinesLock for ha-615000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:13:12.637239    2988 start.go:364] duration metric: took 34.292µs to acquireMachinesLock for "ha-615000"
	I0827 15:13:12.637250    2988 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:13:12.637256    2988 fix.go:54] fixHost starting: 
	I0827 15:13:12.637392    2988 fix.go:112] recreateIfNeeded on ha-615000: state=Stopped err=<nil>
	W0827 15:13:12.637401    2988 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:13:12.640682    2988 out.go:177] * Restarting existing qemu2 VM for "ha-615000" ...
	I0827 15:13:12.647627    2988 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:13:12.647669    2988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:63:be:c2:d1:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000/disk.qcow2
	I0827 15:13:12.649710    2988 main.go:141] libmachine: STDOUT: 
	I0827 15:13:12.649731    2988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:13:12.649758    2988 fix.go:56] duration metric: took 12.503458ms for fixHost
	I0827 15:13:12.649762    2988 start.go:83] releasing machines lock for "ha-615000", held for 12.518625ms
	W0827 15:13:12.649770    2988 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:13:12.649811    2988 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:13:12.649816    2988 start.go:729] Will try again in 5 seconds ...
	I0827 15:13:17.652038    2988 start.go:360] acquireMachinesLock for ha-615000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:13:17.652556    2988 start.go:364] duration metric: took 345.041µs to acquireMachinesLock for "ha-615000"
	I0827 15:13:17.652676    2988 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:13:17.652697    2988 fix.go:54] fixHost starting: 
	I0827 15:13:17.653443    2988 fix.go:112] recreateIfNeeded on ha-615000: state=Stopped err=<nil>
	W0827 15:13:17.653468    2988 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:13:17.659577    2988 out.go:177] * Restarting existing qemu2 VM for "ha-615000" ...
	I0827 15:13:17.666492    2988 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:13:17.666734    2988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:63:be:c2:d1:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000/disk.qcow2
	I0827 15:13:17.673328    2988 main.go:141] libmachine: STDOUT: 
	I0827 15:13:17.673378    2988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:13:17.673452    2988 fix.go:56] duration metric: took 20.75875ms for fixHost
	I0827 15:13:17.673466    2988 start.go:83] releasing machines lock for "ha-615000", held for 20.885333ms
	W0827 15:13:17.673624    2988 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-615000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-615000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:13:17.680544    2988 out.go:201] 
	W0827 15:13:17.684565    2988 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:13:17.684589    2988 out.go:270] * 
	* 
	W0827 15:13:17.687579    2988 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:13:17.695553    2988 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-615000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-615000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-615000 -n ha-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-615000 -n ha-615000: exit status 7 (31.989584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.51s)
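The stderr above records the full retry policy around host start: one failed driver start, the literal "Will try again in 5 seconds ...", a second attempt against the same dead socket, then exit status 80 with GUEST_PROVISION. The sketch below is only a paraphrase in Go of that observable control flow, not minikube's actual code:

    // A paraphrase (not minikube's implementation) of the
    // start / wait 5s / retry / exit flow visible in the stderr above.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // startHost stands in for the driver start that fails above with
    // `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        err := startHost()
        if err == nil {
            return
        }
        fmt.Println("! StartHost failed, but will try again:", err)
        time.Sleep(5 * time.Second) // the "Will try again in 5 seconds ..." in the log
        if err := startHost(); err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            os.Exit(80) // the exit status the test harness reports
        }
    }

Because the retry hits the same refused socket, the five-second delay never helps here: the start attempt fails in roughly five seconds (5.215843417s above), and the 332s test duration is almost entirely the 5m27s stop that preceded it.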

TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-615000 node delete m03 -v=7 --alsologtostderr: exit status 83 (44.088792ms)

-- stdout --
	* The control-plane node ha-615000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-615000"

-- /stdout --
** stderr ** 
	I0827 15:13:17.815029    3001 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:13:17.815295    3001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:13:17.815298    3001 out.go:358] Setting ErrFile to fd 2...
	I0827 15:13:17.815303    3001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:13:17.815436    3001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:13:17.815673    3001 mustload.go:65] Loading cluster: ha-615000
	I0827 15:13:17.815890    3001 config.go:182] Loaded profile config "ha-615000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0827 15:13:17.816219    3001 out.go:270] ! The control-plane node ha-615000 host is not running (will try others): state=Stopped
	! The control-plane node ha-615000 host is not running (will try others): state=Stopped
	W0827 15:13:17.816338    3001 out.go:270] ! The control-plane node ha-615000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-615000-m02 host is not running (will try others): state=Stopped
	I0827 15:13:17.821158    3001 out.go:177] * The control-plane node ha-615000-m03 host is not running: state=Stopped
	I0827 15:13:17.824149    3001 out.go:177]   To start a cluster, run: "minikube start -p ha-615000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-615000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr: exit status 7 (30.468625ms)

-- stdout --
	ha-615000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-615000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-615000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-615000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0827 15:13:17.857518    3003 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:13:17.857663    3003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:13:17.857665    3003 out.go:358] Setting ErrFile to fd 2...
	I0827 15:13:17.857668    3003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:13:17.857787    3003 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:13:17.857912    3003 out.go:352] Setting JSON to false
	I0827 15:13:17.857922    3003 mustload.go:65] Loading cluster: ha-615000
	I0827 15:13:17.857965    3003 notify.go:220] Checking for updates...
	I0827 15:13:17.858153    3003 config.go:182] Loaded profile config "ha-615000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:13:17.858160    3003 status.go:255] checking status of ha-615000 ...
	I0827 15:13:17.858370    3003 status.go:330] ha-615000 host status = "Stopped" (err=<nil>)
	I0827 15:13:17.858373    3003 status.go:343] host is not running, skipping remaining checks
	I0827 15:13:17.858375    3003 status.go:257] ha-615000 status: &{Name:ha-615000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 15:13:17.858386    3003 status.go:255] checking status of ha-615000-m02 ...
	I0827 15:13:17.858480    3003 status.go:330] ha-615000-m02 host status = "Stopped" (err=<nil>)
	I0827 15:13:17.858482    3003 status.go:343] host is not running, skipping remaining checks
	I0827 15:13:17.858484    3003 status.go:257] ha-615000-m02 status: &{Name:ha-615000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 15:13:17.858488    3003 status.go:255] checking status of ha-615000-m03 ...
	I0827 15:13:17.858579    3003 status.go:330] ha-615000-m03 host status = "Stopped" (err=<nil>)
	I0827 15:13:17.858582    3003 status.go:343] host is not running, skipping remaining checks
	I0827 15:13:17.858583    3003 status.go:257] ha-615000-m03 status: &{Name:ha-615000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 15:13:17.858587    3003 status.go:255] checking status of ha-615000-m04 ...
	I0827 15:13:17.858685    3003 status.go:330] ha-615000-m04 host status = "Stopped" (err=<nil>)
	I0827 15:13:17.858688    3003 status.go:343] host is not running, skipping remaining checks
	I0827 15:13:17.858690    3003 status.go:257] ha-615000-m04 status: &{Name:ha-615000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-615000 -n ha-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-615000 -n ha-615000: exit status 7 (30.553542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-615000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-615000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-615000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-615000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-615000 -n ha-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-615000 -n ha-615000: exit status 7 (30.345209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
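The assertion at ha_test.go:413 reads the "Status" field of the ha-615000 entry in the `profile list --output json` payload quoted above and expects "Degraded"; with every node stopped, minikube reports "Stopped" instead. A minimal Go sketch for extracting that field from the same payload shape (the struct here is illustrative, not minikube's own type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileList mirrors only the fields the assertion above looks at.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
        } `json:"valid"`
    }

    func main() {
        // Trimmed from the payload quoted in the failure message above.
        raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-615000","Status":"Stopped"}]}`)

        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            // Prints "ha-615000: Stopped"; the test wanted "Degraded".
            fmt.Printf("%s: %s\n", p.Name, p.Status)
        }
    }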

TestMultiControlPlane/serial/StopCluster (202.1s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 stop -v=7 --alsologtostderr
E0827 15:14:24.606958    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 15:15:44.817234    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-615000 stop -v=7 --alsologtostderr: signal: killed (3m22.023435208s)

-- stdout --
	* Stopping node "ha-615000-m04"  ...
	* Stopping node "ha-615000-m03"  ...
	* Stopping node "ha-615000-m02"  ...

-- /stdout --
** stderr ** 
	I0827 15:13:17.997608    3012 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:13:17.997767    3012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:13:17.997770    3012 out.go:358] Setting ErrFile to fd 2...
	I0827 15:13:17.997772    3012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:13:17.997890    3012 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:13:17.998104    3012 out.go:352] Setting JSON to false
	I0827 15:13:17.998215    3012 mustload.go:65] Loading cluster: ha-615000
	I0827 15:13:17.998431    3012 config.go:182] Loaded profile config "ha-615000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:13:17.998489    3012 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/ha-615000/config.json ...
	I0827 15:13:17.998747    3012 mustload.go:65] Loading cluster: ha-615000
	I0827 15:13:17.998829    3012 config.go:182] Loaded profile config "ha-615000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:13:17.998849    3012 stop.go:39] StopHost: ha-615000-m04
	I0827 15:13:18.003321    3012 out.go:177] * Stopping node "ha-615000-m04"  ...
	I0827 15:13:18.011135    3012 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0827 15:13:18.011172    3012 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0827 15:13:18.011183    3012 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m04/id_rsa Username:docker}
	W0827 15:14:33.013301    3012 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0827 15:14:33.013587    3012 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0827 15:14:33.013748    3012 main.go:141] libmachine: Stopping "ha-615000-m04"...
	I0827 15:14:33.013930    3012 stop.go:66] stop err: Machine "ha-615000-m04" is already stopped.
	I0827 15:14:33.013959    3012 stop.go:69] host is already stopped
	I0827 15:14:33.013986    3012 stop.go:39] StopHost: ha-615000-m03
	I0827 15:14:33.023401    3012 out.go:177] * Stopping node "ha-615000-m03"  ...
	I0827 15:14:33.027345    3012 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0827 15:14:33.027489    3012 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0827 15:14:33.027535    3012 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m03/id_rsa Username:docker}
	W0827 15:15:48.028713    3012 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0827 15:15:48.028991    3012 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0827 15:15:48.029140    3012 main.go:141] libmachine: Stopping "ha-615000-m03"...
	I0827 15:15:48.029283    3012 stop.go:66] stop err: Machine "ha-615000-m03" is already stopped.
	I0827 15:15:48.029312    3012 stop.go:69] host is already stopped
	I0827 15:15:48.029374    3012 stop.go:39] StopHost: ha-615000-m02
	I0827 15:15:48.037468    3012 out.go:177] * Stopping node "ha-615000-m02"  ...
	I0827 15:15:48.040553    3012 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0827 15:15:48.040720    3012 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0827 15:15:48.040751    3012 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/ha-615000-m02/id_rsa Username:docker}

** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-darwin-arm64 -p ha-615000 stop -v=7 --alsologtostderr": signal: killed
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr: context deadline exceeded (2.709µs)
ha_test.go:540: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-615000 -n ha-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-615000 -n ha-615000: exit status 7 (71.458208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.10s)
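
Note: the stderr above accounts for the 202 s. Before stopping each node, minikube backs up /etc/cni and /etc/kubernetes over SSH, and the dial to each already-stopped node sits in the kernel's TCP connect timeout for ~75 s (15:13:18 to 15:14:33 for m04, 15:14:33 to 15:15:48 for m03) before being skipped; the harness killed the process while m02's dial was still pending. A small probe sketch with an explicit deadline (illustrative only, not minikube's code; the IP is taken from the log, the 5 s bound is arbitrary):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH dials a node's SSH endpoint with an explicit deadline, so an
// already-stopped node fails fast instead of sitting in the kernel's
// ~75s TCP connect timeout seen in the log above.
func probeSSH(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // e.g. "dial tcp 192.168.105.8:22: i/o timeout"
	}
	return conn.Close()
}

func main() {
	// IP of ha-615000-m04 from the log above.
	if err := probeSSH("192.168.105.8:22", 5*time.Second); err != nil {
		fmt.Println("node unreachable, skip config backup:", err)
		return
	}
	fmt.Println("node reachable over SSH")
}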

TestImageBuild/serial/Setup (10s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-410000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-410000 --driver=qemu2 : exit status 80 (9.927588958s)

-- stdout --
	* [image-410000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-410000" primary control-plane node in "image-410000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-410000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-410000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-410000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-410000 -n image-410000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-410000 -n image-410000: exit status 7 (67.791333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-410000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.00s)
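
Note: this is the first of many identical failures in this report: every `minikube start` on the qemu2 driver dies with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning nothing was accepting connections on the socket_vmnet control socket when `socket_vmnet_client` tried to launch QEMU. A quick probe that separates "socket file missing" from "daemon not listening" (a sketch; the path is the SocketVMnetPath from the profile config above, the 2 s timeout is arbitrary):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the profile config in this report.
	const sock = "/var/run/socket_vmnet"

	if _, err := os.Stat(sock); err != nil {
		fmt.Println("socket file missing (daemon never created it):", err)
		return
	}
	// "connect: connection refused" here matches the failure mode in this
	// report: the file exists but no socket_vmnet daemon is listening.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not accepting connections:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is up")
}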

TestJSONOutput/start/Command (9.75s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-598000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-598000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.751329333s)

-- stdout --
	{"specversion":"1.0","id":"248e4a35-3521-4147-8fef-9c5b3536b463","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-598000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbbae47f-d760-47d3-b628-4c3b078a39d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19522"}}
	{"specversion":"1.0","id":"160aee95-85a0-40d7-940a-deebdddad240","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig"}}
	{"specversion":"1.0","id":"71734194-8fbf-44d6-aa44-d471537fc4c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"49f80905-ef76-414a-a758-1af5976e8f18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"94df7658-e303-4b90-a028-c74ee23c0db9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube"}}
	{"specversion":"1.0","id":"a881f4a4-e926-4287-bda5-9ad9cafd2d57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e76833ea-1f5e-4830-94cf-da64128bed38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"42bbca5a-90d3-4ada-a912-c529823589a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"ee8a2cc2-b3b6-4dca-b82c-8bf3239f4d62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-598000\" primary control-plane node in \"json-output-598000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd9e43dc-47e4-42e8-bb3e-df2f83cf90ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"88661cf8-845b-4c96-8446-2536a1cd046a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-598000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"af56a9a4-6fe5-4763-a3fb-8da693a096e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"10b80e66-b949-490b-9c7d-1a9cd6a5a4e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"cd98903b-bc28-41a1-bec5-1b6445e31f95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-598000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"5c12d99d-53c3-4593-8aa1-c0de03f02c05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"fd05058a-4ad8-4ec6-9671-5c9f61e8f735","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-598000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.75s)
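
Note: the marshalling errors at the end are a knock-on effect of the start failure: the raw `OUTPUT:`/`ERROR:` lines from the failed VM launch are interleaved with the CloudEvents JSON stream, and the test decodes stdout line by line, so the first non-JSON line aborts the parse. The unpause failure below trips the same check on a leading `*`. A simplified model of that per-line decode (not the actual json_output_test.go code) reproduces the exact error string:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// One well-formed CloudEvent line, then the stray plain-text line the
	// failed VM launch injected into stdout.
	lines := []string{
		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"0"}}`,
		`OUTPUT: `,
	}
	for _, l := range lines {
		var ev map[string]any
		if err := json.Unmarshal([]byte(l), &ev); err != nil {
			// Prints: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			continue
		}
		fmt.Println("event type:", ev["type"])
	}
}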

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-598000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-598000 --output=json --user=testUser: exit status 83 (75.7845ms)

-- stdout --
	{"specversion":"1.0","id":"05c60acf-122b-4a3e-8b0c-fab96de814a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-598000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"bb2b64a9-e1ef-4486-891d-b4ef710098c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-598000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-598000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-598000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-598000 --output=json --user=testUser: exit status 83 (45.251542ms)

-- stdout --
	* The control-plane node json-output-598000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-598000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-598000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-598000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.24s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-158000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-158000 --driver=qemu2 : exit status 80 (9.937571916s)

-- stdout --
	* [first-158000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-158000" primary control-plane node in "first-158000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-158000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-158000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-158000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-27 15:17:14.163197 -0700 PDT m=+2442.763105043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-159000 -n second-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-159000 -n second-159000: exit status 85 (75.684584ms)

-- stdout --
	* Profile "second-159000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-159000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-159000" host is not running, skipping log retrieval (state="* Profile \"second-159000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-159000\"")
helpers_test.go:175: Cleaning up "second-159000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-159000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-27 15:17:14.355672 -0700 PDT m=+2442.955581501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-158000 -n first-158000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-158000 -n first-158000: exit status 7 (30.632334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-158000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-158000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-158000
--- FAIL: TestMinikubeProfile (10.24s)

TestMountStart/serial/StartWithMountFirst (10.01s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-792000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-792000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.943389416s)

-- stdout --
	* [mount-start-1-792000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-792000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-792000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-792000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-792000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-792000 -n mount-start-1-792000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-792000 -n mount-start-1-792000: exit status 7 (69.60025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-792000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.01s)

TestMultiNode/serial/FreshStart2Nodes (9.91s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-437000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
E0827 15:17:27.703499    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-437000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.832535417s)

-- stdout --
	* [multinode-437000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-437000" primary control-plane node in "multinode-437000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-437000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:17:24.694730    3172 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:17:24.695078    3172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:17:24.695084    3172 out.go:358] Setting ErrFile to fd 2...
	I0827 15:17:24.695086    3172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:17:24.695283    3172 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:17:24.696664    3172 out.go:352] Setting JSON to false
	I0827 15:17:24.713129    3172 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2809,"bootTime":1724794235,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:17:24.713205    3172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:17:24.720290    3172 out.go:177] * [multinode-437000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:17:24.728516    3172 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:17:24.728551    3172 notify.go:220] Checking for updates...
	I0827 15:17:24.735469    3172 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:17:24.738448    3172 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:17:24.741438    3172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:17:24.744532    3172 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:17:24.747489    3172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:17:24.750644    3172 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:17:24.755454    3172 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:17:24.761479    3172 start.go:297] selected driver: qemu2
	I0827 15:17:24.761485    3172 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:17:24.761494    3172 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:17:24.763763    3172 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:17:24.766382    3172 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:17:24.769496    3172 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:17:24.769527    3172 cni.go:84] Creating CNI manager for ""
	I0827 15:17:24.769531    3172 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0827 15:17:24.769538    3172 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0827 15:17:24.769566    3172 start.go:340] cluster config:
	{Name:multinode-437000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:17:24.773269    3172 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:17:24.780445    3172 out.go:177] * Starting "multinode-437000" primary control-plane node in "multinode-437000" cluster
	I0827 15:17:24.784460    3172 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:17:24.784474    3172 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:17:24.784483    3172 cache.go:56] Caching tarball of preloaded images
	I0827 15:17:24.784542    3172 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:17:24.784547    3172 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:17:24.784749    3172 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/multinode-437000/config.json ...
	I0827 15:17:24.784761    3172 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/multinode-437000/config.json: {Name:mk8f6a7e6b2b5de436687aedcbf88d79e4add838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:17:24.784992    3172 start.go:360] acquireMachinesLock for multinode-437000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:17:24.785028    3172 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "multinode-437000"
	I0827 15:17:24.785040    3172 start.go:93] Provisioning new machine with config: &{Name:multinode-437000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0 ClusterName:multinode-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:17:24.785077    3172 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:17:24.792633    3172 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:17:24.810767    3172 start.go:159] libmachine.API.Create for "multinode-437000" (driver="qemu2")
	I0827 15:17:24.810802    3172 client.go:168] LocalClient.Create starting
	I0827 15:17:24.810878    3172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:17:24.810910    3172 main.go:141] libmachine: Decoding PEM data...
	I0827 15:17:24.810920    3172 main.go:141] libmachine: Parsing certificate...
	I0827 15:17:24.810959    3172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:17:24.810984    3172 main.go:141] libmachine: Decoding PEM data...
	I0827 15:17:24.810992    3172 main.go:141] libmachine: Parsing certificate...
	I0827 15:17:24.811361    3172 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:17:24.984175    3172 main.go:141] libmachine: Creating SSH key...
	I0827 15:17:25.078969    3172 main.go:141] libmachine: Creating Disk image...
	I0827 15:17:25.078975    3172 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:17:25.079179    3172 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2
	I0827 15:17:25.088577    3172 main.go:141] libmachine: STDOUT: 
	I0827 15:17:25.088603    3172 main.go:141] libmachine: STDERR: 
	I0827 15:17:25.088646    3172 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2 +20000M
	I0827 15:17:25.096676    3172 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:17:25.096689    3172 main.go:141] libmachine: STDERR: 
	I0827 15:17:25.096703    3172 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2
	I0827 15:17:25.096706    3172 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:17:25.096716    3172 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:17:25.096745    3172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:82:08:d7:f7:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2
	I0827 15:17:25.098362    3172 main.go:141] libmachine: STDOUT: 
	I0827 15:17:25.098375    3172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:17:25.098397    3172 client.go:171] duration metric: took 287.593667ms to LocalClient.Create
	I0827 15:17:27.100559    3172 start.go:128] duration metric: took 2.315489958s to createHost
	I0827 15:17:27.100624    3172 start.go:83] releasing machines lock for "multinode-437000", held for 2.315614625s
	W0827 15:17:27.100733    3172 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:17:27.114830    3172 out.go:177] * Deleting "multinode-437000" in qemu2 ...
	W0827 15:17:27.143879    3172 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:17:27.143898    3172 start.go:729] Will try again in 5 seconds ...
	I0827 15:17:32.146045    3172 start.go:360] acquireMachinesLock for multinode-437000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:17:32.146473    3172 start.go:364] duration metric: took 344.542µs to acquireMachinesLock for "multinode-437000"
	I0827 15:17:32.146591    3172 start.go:93] Provisioning new machine with config: &{Name:multinode-437000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0 ClusterName:multinode-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:17:32.146955    3172 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:17:32.156674    3172 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:17:32.209722    3172 start.go:159] libmachine.API.Create for "multinode-437000" (driver="qemu2")
	I0827 15:17:32.209778    3172 client.go:168] LocalClient.Create starting
	I0827 15:17:32.209881    3172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:17:32.209949    3172 main.go:141] libmachine: Decoding PEM data...
	I0827 15:17:32.209969    3172 main.go:141] libmachine: Parsing certificate...
	I0827 15:17:32.210031    3172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:17:32.210076    3172 main.go:141] libmachine: Decoding PEM data...
	I0827 15:17:32.210086    3172 main.go:141] libmachine: Parsing certificate...
	I0827 15:17:32.210643    3172 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:17:32.371184    3172 main.go:141] libmachine: Creating SSH key...
	I0827 15:17:32.430874    3172 main.go:141] libmachine: Creating Disk image...
	I0827 15:17:32.430880    3172 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:17:32.431088    3172 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2
	I0827 15:17:32.440508    3172 main.go:141] libmachine: STDOUT: 
	I0827 15:17:32.440525    3172 main.go:141] libmachine: STDERR: 
	I0827 15:17:32.440563    3172 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2 +20000M
	I0827 15:17:32.448659    3172 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:17:32.448672    3172 main.go:141] libmachine: STDERR: 
	I0827 15:17:32.448681    3172 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2
	I0827 15:17:32.448684    3172 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:17:32.448695    3172 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:17:32.448723    3172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c7:57:35:d3:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2
	I0827 15:17:32.450409    3172 main.go:141] libmachine: STDOUT: 
	I0827 15:17:32.450423    3172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:17:32.450434    3172 client.go:171] duration metric: took 240.653958ms to LocalClient.Create
	I0827 15:17:34.452594    3172 start.go:128] duration metric: took 2.305603292s to createHost
	I0827 15:17:34.452658    3172 start.go:83] releasing machines lock for "multinode-437000", held for 2.306189084s
	W0827 15:17:34.453045    3172 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-437000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-437000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:17:34.467723    3172 out.go:201] 
	W0827 15:17:34.471769    3172 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:17:34.471793    3172 out.go:270] * 
	* 
	W0827 15:17:34.474744    3172 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:17:34.485686    3172 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-437000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (70.383875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.91s)
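
Note: the stderr above shows the full shape of a qemu2 start in this run: create host, `socket_vmnet_client` refused, delete the half-created machine, wait 5 s, one retry, exit 80 (GUEST_PROVISION). With the daemon down the retry always fails the same way, which is why each start-based test in this report costs roughly 10 s. A condensed sketch of that control flow (illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// startHost stands in for minikube's create-VM step; with the
// socket_vmnet daemon down it fails identically on every attempt.
func startHost() error {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		return fmt.Errorf("creating host: %w", err)
	}
	return conn.Close()
}

func main() {
	err := startHost()
	if err == nil {
		fmt.Println("host started")
		return
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds" in the log
	if err := startHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		os.Exit(80) // the exit status 80 seen throughout this report
	}
	fmt.Println("host started on retry")
}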

TestMultiNode/serial/DeployApp2Nodes (108.55s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.18825ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-437000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- rollout status deployment/busybox: exit status 1 (58.522167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
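Note: `error: cluster "multinode-437000" does not exist` and `error: no server found for cluster "multinode-437000"` both come from kubectl resolving the profile's kubeconfig entry: the profile exists on disk, but no API server was ever started, so there is apparently no server address to contact. A quick hand check (illustrative):

    out/minikube-darwin-arm64 profile list
    kubectl config get-contexts multinode-437000
    kubectl config view -o 'jsonpath={.clusters[?(@.name=="multinode-437000")].cluster.server}'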
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.182708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.528917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.241375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.153041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.545791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.961542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.308625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.511083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.732042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.535583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.489417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
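Note: multinode_test.go:505 is a polling helper; the identical "may be temporary" attempts above are retries spread over this subtest's ~108s before multinode_test.go:524 gives up. A bounded hand-run equivalent (illustrative; the exact cadence is an assumption):

    for i in $(seq 1 10); do
      out/minikube-darwin-arm64 kubectl -p multinode-437000 -- \
        get pods -o jsonpath='{.items[*].status.podIP}' && break
      sleep 10
    done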
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.888917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.546917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.905458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.075834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (30.575333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (108.55s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.871958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (30.933792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-437000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-437000 -v 3 --alsologtostderr: exit status 83 (45.304334ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-437000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-437000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:19:23.232951    3257 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:19:23.233118    3257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:23.233121    3257 out.go:358] Setting ErrFile to fd 2...
	I0827 15:19:23.233123    3257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:23.233251    3257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:19:23.233489    3257 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:19:23.233681    3257 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:19:23.239827    3257 out.go:177] * The control-plane node multinode-437000 host is not running: state=Stopped
	I0827 15:19:23.242710    3257 out.go:177]   To start a cluster, run: "minikube start -p multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-437000 -v 3 --alsologtostderr" : exit status 83
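Note: exit status 83 signals that minikube refused the operation because the control-plane host is stopped (state=Stopped, per the stdout above); the recovery path is the one minikube itself prints (illustrative):

    out/minikube-darwin-arm64 start -p multinode-437000
    out/minikube-darwin-arm64 node add -p multinode-437000 -v 3 --alsologtostderr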
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (29.769958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-437000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-437000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.040708ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-437000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-437000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-437000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
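Note: the two failures compound: with no multinode-437000 context in the kubeconfig, kubectl writes only the configuration error to stderr and nothing to stdout, so the label decode at multinode_test.go:230 then fails on empty input ("unexpected end of JSON input"). Reproducing by hand (illustrative):

    kubectl config get-contexts    # the multinode-437000 context is absent
    kubectl --context multinode-437000 get nodes \
      -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"; echo "exit=$?"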
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (30.029084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-437000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-437000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-437000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-437000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (29.98475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
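Note: the ProfileList assertion counts Config.Nodes in the JSON above; the profile holds a single control-plane entry because FreshStart2Nodes never provisioned the second node and AddNode failed to add a third, while the test expects 3 (the 2 requested at start plus the 1 from AddNode). Counting by hand, assuming jq is available (illustrative):

    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | select(.Name == "multinode-437000") | .Config.Nodes | length'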

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status --output json --alsologtostderr: exit status 7 (30.400209ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-437000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:19:23.443730    3269 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:19:23.443859    3269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:23.443865    3269 out.go:358] Setting ErrFile to fd 2...
	I0827 15:19:23.443867    3269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:23.443995    3269 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:19:23.444112    3269 out.go:352] Setting JSON to true
	I0827 15:19:23.444125    3269 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:19:23.444170    3269 notify.go:220] Checking for updates...
	I0827 15:19:23.444315    3269 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:19:23.444321    3269 status.go:255] checking status of multinode-437000 ...
	I0827 15:19:23.444527    3269 status.go:330] multinode-437000 host status = "Stopped" (err=<nil>)
	I0827 15:19:23.444530    3269 status.go:343] host is not running, skipping remaining checks
	I0827 15:19:23.444533    3269 status.go:257] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-437000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
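Note: with a single node in the profile, `status --output json` emits one JSON object (the stdout above), while the test decodes into a []cmd.Status slice, i.e. it expects an array; hence the unmarshal error. The shape is easy to confirm, assuming jq (illustrative):

    out/minikube-darwin-arm64 -p multinode-437000 status --output json | jq 'type'
    # prints "object"; the test's []cmd.Status decode needs an array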
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (29.906083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 node stop m03: exit status 85 (46.731083ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-437000 node stop m03": exit status 85
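Note: exit status 85 (GUEST_NODE_RETRIEVE) follows directly from the earlier failures: node m03 was never created, so there is nothing to stop. The profile's actual node set can be listed with (illustrative):

    out/minikube-darwin-arm64 node list -p multinode-437000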
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status: exit status 7 (31.017959ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr: exit status 7 (29.954667ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:19:23.582230    3277 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:19:23.582384    3277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:23.582387    3277 out.go:358] Setting ErrFile to fd 2...
	I0827 15:19:23.582390    3277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:23.582512    3277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:19:23.582622    3277 out.go:352] Setting JSON to false
	I0827 15:19:23.582632    3277 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:19:23.582690    3277 notify.go:220] Checking for updates...
	I0827 15:19:23.582820    3277 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:19:23.582828    3277 status.go:255] checking status of multinode-437000 ...
	I0827 15:19:23.583032    3277 status.go:330] multinode-437000 host status = "Stopped" (err=<nil>)
	I0827 15:19:23.583036    3277 status.go:343] host is not running, skipping remaining checks
	I0827 15:19:23.583038    3277 status.go:257] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr": multinode-437000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (30.290958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (54.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.062875ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:19:23.642841    3281 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:19:23.643073    3281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:23.643076    3281 out.go:358] Setting ErrFile to fd 2...
	I0827 15:19:23.643079    3281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:23.643226    3281 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:19:23.643476    3281 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:19:23.643668    3281 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:19:23.647787    3281 out.go:201] 
	W0827 15:19:23.650780    3281 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0827 15:19:23.650785    3281 out.go:270] * 
	* 
	W0827 15:19:23.652442    3281 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:19:23.653840    3281 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0827 15:19:23.642841    3281 out.go:345] Setting OutFile to fd 1 ...
I0827 15:19:23.643073    3281 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 15:19:23.643076    3281 out.go:358] Setting ErrFile to fd 2...
I0827 15:19:23.643079    3281 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 15:19:23.643226    3281 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
I0827 15:19:23.643476    3281 mustload.go:65] Loading cluster: multinode-437000
I0827 15:19:23.643668    3281 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 15:19:23.647787    3281 out.go:201] 
W0827 15:19:23.650780    3281 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0827 15:19:23.650785    3281 out.go:270] * 
* 
W0827 15:19:23.652442    3281 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0827 15:19:23.653840    3281 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-437000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (29.480208ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:19:23.686503    3283 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:19:23.686656    3283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:23.686660    3283 out.go:358] Setting ErrFile to fd 2...
	I0827 15:19:23.686662    3283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:23.686787    3283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:19:23.686904    3283 out.go:352] Setting JSON to false
	I0827 15:19:23.686917    3283 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:19:23.686966    3283 notify.go:220] Checking for updates...
	I0827 15:19:23.687100    3283 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:19:23.687106    3283 status.go:255] checking status of multinode-437000 ...
	I0827 15:19:23.687345    3283 status.go:330] multinode-437000 host status = "Stopped" (err=<nil>)
	I0827 15:19:23.687349    3283 status.go:343] host is not running, skipping remaining checks
	I0827 15:19:23.687351    3283 status.go:257] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0827 15:19:24.603174    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (73.406292ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:19:24.785895    3285 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:19:24.786077    3285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:24.786082    3285 out.go:358] Setting ErrFile to fd 2...
	I0827 15:19:24.786085    3285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:24.786258    3285 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:19:24.786409    3285 out.go:352] Setting JSON to false
	I0827 15:19:24.786421    3285 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:19:24.786460    3285 notify.go:220] Checking for updates...
	I0827 15:19:24.786679    3285 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:19:24.786687    3285 status.go:255] checking status of multinode-437000 ...
	I0827 15:19:24.786978    3285 status.go:330] multinode-437000 host status = "Stopped" (err=<nil>)
	I0827 15:19:24.786983    3285 status.go:343] host is not running, skipping remaining checks
	I0827 15:19:24.786986    3285 status.go:257] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (72.307375ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:19:26.170196    3287 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:19:26.170425    3287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:26.170429    3287 out.go:358] Setting ErrFile to fd 2...
	I0827 15:19:26.170432    3287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:26.170610    3287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:19:26.170775    3287 out.go:352] Setting JSON to false
	I0827 15:19:26.170790    3287 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:19:26.170831    3287 notify.go:220] Checking for updates...
	I0827 15:19:26.171123    3287 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:19:26.171136    3287 status.go:255] checking status of multinode-437000 ...
	I0827 15:19:26.171436    3287 status.go:330] multinode-437000 host status = "Stopped" (err=<nil>)
	I0827 15:19:26.171441    3287 status.go:343] host is not running, skipping remaining checks
	I0827 15:19:26.171444    3287 status.go:257] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (72.786625ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:19:27.788268    3289 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:19:27.788471    3289 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:27.788476    3289 out.go:358] Setting ErrFile to fd 2...
	I0827 15:19:27.788479    3289 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:27.788647    3289 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:19:27.788817    3289 out.go:352] Setting JSON to false
	I0827 15:19:27.788830    3289 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:19:27.788872    3289 notify.go:220] Checking for updates...
	I0827 15:19:27.789083    3289 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:19:27.789091    3289 status.go:255] checking status of multinode-437000 ...
	I0827 15:19:27.789424    3289 status.go:330] multinode-437000 host status = "Stopped" (err=<nil>)
	I0827 15:19:27.789430    3289 status.go:343] host is not running, skipping remaining checks
	I0827 15:19:27.789433    3289 status.go:257] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (73.082208ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:19:32.412269    3291 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:19:32.412506    3291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:32.412510    3291 out.go:358] Setting ErrFile to fd 2...
	I0827 15:19:32.412513    3291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:32.412698    3291 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:19:32.412871    3291 out.go:352] Setting JSON to false
	I0827 15:19:32.412886    3291 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:19:32.412917    3291 notify.go:220] Checking for updates...
	I0827 15:19:32.413166    3291 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:19:32.413175    3291 status.go:255] checking status of multinode-437000 ...
	I0827 15:19:32.413446    3291 status.go:330] multinode-437000 host status = "Stopped" (err=<nil>)
	I0827 15:19:32.413451    3291 status.go:343] host is not running, skipping remaining checks
	I0827 15:19:32.413454    3291 status.go:257] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (72.164875ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:19:39.096819    3293 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:19:39.097047    3293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:39.097052    3293 out.go:358] Setting ErrFile to fd 2...
	I0827 15:19:39.097055    3293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:39.097225    3293 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:19:39.097401    3293 out.go:352] Setting JSON to false
	I0827 15:19:39.097415    3293 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:19:39.097456    3293 notify.go:220] Checking for updates...
	I0827 15:19:39.097665    3293 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:19:39.097673    3293 status.go:255] checking status of multinode-437000 ...
	I0827 15:19:39.097934    3293 status.go:330] multinode-437000 host status = "Stopped" (err=<nil>)
	I0827 15:19:39.097939    3293 status.go:343] host is not running, skipping remaining checks
	I0827 15:19:39.097942    3293 status.go:257] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (72.8825ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:19:47.288168    3298 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:19:47.288352    3298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:47.288357    3298 out.go:358] Setting ErrFile to fd 2...
	I0827 15:19:47.288360    3298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:47.288563    3298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:19:47.288731    3298 out.go:352] Setting JSON to false
	I0827 15:19:47.288744    3298 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:19:47.288781    3298 notify.go:220] Checking for updates...
	I0827 15:19:47.288994    3298 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:19:47.289001    3298 status.go:255] checking status of multinode-437000 ...
	I0827 15:19:47.289318    3298 status.go:330] multinode-437000 host status = "Stopped" (err=<nil>)
	I0827 15:19:47.289323    3298 status.go:343] host is not running, skipping remaining checks
	I0827 15:19:47.289326    3298 status.go:257] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (71.242541ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:19:55.955046    3300 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:19:55.955249    3300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:55.955254    3300 out.go:358] Setting ErrFile to fd 2...
	I0827 15:19:55.955257    3300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:19:55.955417    3300 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:19:55.955559    3300 out.go:352] Setting JSON to false
	I0827 15:19:55.955572    3300 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:19:55.955609    3300 notify.go:220] Checking for updates...
	I0827 15:19:55.955820    3300 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:19:55.955827    3300 status.go:255] checking status of multinode-437000 ...
	I0827 15:19:55.956136    3300 status.go:330] multinode-437000 host status = "Stopped" (err=<nil>)
	I0827 15:19:55.956141    3300 status.go:343] host is not running, skipping remaining checks
	I0827 15:19:55.956144    3300 status.go:257] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (73.560125ms)

-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0827 15:20:17.939723    3302 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:20:17.939939    3302 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:20:17.939944    3302 out.go:358] Setting ErrFile to fd 2...
	I0827 15:20:17.939946    3302 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:20:17.940114    3302 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:20:17.940304    3302 out.go:352] Setting JSON to false
	I0827 15:20:17.940317    3302 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:20:17.940357    3302 notify.go:220] Checking for updates...
	I0827 15:20:17.940594    3302 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:20:17.940602    3302 status.go:255] checking status of multinode-437000 ...
	I0827 15:20:17.940895    3302 status.go:330] multinode-437000 host status = "Stopped" (err=<nil>)
	I0827 15:20:17.940900    3302 status.go:343] host is not running, skipping remaining checks
	I0827 15:20:17.940903    3302 status.go:257] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (32.881ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (54.36s)

TestMultiNode/serial/RestartKeepsNodes (8.29s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-437000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-437000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-437000: (2.929232208s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-437000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-437000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.22440325s)

-- stdout --
	* [multinode-437000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-437000" primary control-plane node in "multinode-437000" cluster
	* Restarting existing qemu2 VM for "multinode-437000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-437000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:20:20.997272    3326 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:20:20.997431    3326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:20:20.997435    3326 out.go:358] Setting ErrFile to fd 2...
	I0827 15:20:20.997438    3326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:20:20.997617    3326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:20:20.998739    3326 out.go:352] Setting JSON to false
	I0827 15:20:21.018040    3326 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2986,"bootTime":1724794235,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:20:21.018133    3326 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:20:21.021743    3326 out.go:177] * [multinode-437000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:20:21.028732    3326 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:20:21.028777    3326 notify.go:220] Checking for updates...
	I0827 15:20:21.035656    3326 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:20:21.038723    3326 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:20:21.041720    3326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:20:21.044669    3326 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:20:21.047695    3326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:20:21.050908    3326 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:20:21.050962    3326 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:20:21.055675    3326 out.go:177] * Using the qemu2 driver based on existing profile
	I0827 15:20:21.061699    3326 start.go:297] selected driver: qemu2
	I0827 15:20:21.061707    3326 start.go:901] validating driver "qemu2" against &{Name:multinode-437000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:20:21.061774    3326 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:20:21.064276    3326 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:20:21.064321    3326 cni.go:84] Creating CNI manager for ""
	I0827 15:20:21.064327    3326 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0827 15:20:21.064385    3326 start.go:340] cluster config:
	{Name:multinode-437000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:20:21.068172    3326 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:20:21.076721    3326 out.go:177] * Starting "multinode-437000" primary control-plane node in "multinode-437000" cluster
	I0827 15:20:21.080686    3326 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:20:21.080703    3326 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:20:21.080714    3326 cache.go:56] Caching tarball of preloaded images
	I0827 15:20:21.080773    3326 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:20:21.080778    3326 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:20:21.080847    3326 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/multinode-437000/config.json ...
	I0827 15:20:21.081271    3326 start.go:360] acquireMachinesLock for multinode-437000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:20:21.081305    3326 start.go:364] duration metric: took 28.584µs to acquireMachinesLock for "multinode-437000"
	I0827 15:20:21.081315    3326 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:20:21.081323    3326 fix.go:54] fixHost starting: 
	I0827 15:20:21.081439    3326 fix.go:112] recreateIfNeeded on multinode-437000: state=Stopped err=<nil>
	W0827 15:20:21.081448    3326 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:20:21.089696    3326 out.go:177] * Restarting existing qemu2 VM for "multinode-437000" ...
	I0827 15:20:21.093704    3326 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:20:21.093742    3326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c7:57:35:d3:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2
	I0827 15:20:21.095737    3326 main.go:141] libmachine: STDOUT: 
	I0827 15:20:21.095758    3326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:20:21.095789    3326 fix.go:56] duration metric: took 14.4675ms for fixHost
	I0827 15:20:21.095795    3326 start.go:83] releasing machines lock for "multinode-437000", held for 14.485ms
	W0827 15:20:21.095806    3326 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:20:21.095840    3326 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:20:21.095844    3326 start.go:729] Will try again in 5 seconds ...
	I0827 15:20:26.097960    3326 start.go:360] acquireMachinesLock for multinode-437000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:20:26.098336    3326 start.go:364] duration metric: took 274.458µs to acquireMachinesLock for "multinode-437000"
	I0827 15:20:26.098438    3326 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:20:26.098457    3326 fix.go:54] fixHost starting: 
	I0827 15:20:26.099163    3326 fix.go:112] recreateIfNeeded on multinode-437000: state=Stopped err=<nil>
	W0827 15:20:26.099188    3326 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:20:26.107526    3326 out.go:177] * Restarting existing qemu2 VM for "multinode-437000" ...
	I0827 15:20:26.111588    3326 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:20:26.111887    3326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c7:57:35:d3:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2
	I0827 15:20:26.121169    3326 main.go:141] libmachine: STDOUT: 
	I0827 15:20:26.121231    3326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:20:26.121296    3326 fix.go:56] duration metric: took 22.839375ms for fixHost
	I0827 15:20:26.121316    3326 start.go:83] releasing machines lock for "multinode-437000", held for 22.959791ms
	W0827 15:20:26.121474    3326 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-437000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-437000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:20:26.129504    3326 out.go:201] 
	W0827 15:20:26.133543    3326 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:20:26.133567    3326 out.go:270] * 
	* 
	W0827 15:20:26.136504    3326 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:20:26.145549    3326 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-437000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-437000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (32.604042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.29s)

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 node delete m03: exit status 83 (39.74675ms)

-- stdout --
	* The control-plane node multinode-437000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-437000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-437000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr: exit status 7 (30.302042ms)

-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0827 15:20:26.330080    3340 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:20:26.330235    3340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:20:26.330241    3340 out.go:358] Setting ErrFile to fd 2...
	I0827 15:20:26.330244    3340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:20:26.330376    3340 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:20:26.330490    3340 out.go:352] Setting JSON to false
	I0827 15:20:26.330500    3340 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:20:26.330553    3340 notify.go:220] Checking for updates...
	I0827 15:20:26.330720    3340 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:20:26.330725    3340 status.go:255] checking status of multinode-437000 ...
	I0827 15:20:26.330934    3340 status.go:330] multinode-437000 host status = "Stopped" (err=<nil>)
	I0827 15:20:26.330938    3340 status.go:343] host is not running, skipping remaining checks
	I0827 15:20:26.330941    3340 status.go:257] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (29.890666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.06s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-437000 stop: (2.933052s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status: exit status 7 (62.123ms)

-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr: exit status 7 (33.069042ms)

-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0827 15:20:29.388821    3364 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:20:29.388981    3364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:20:29.388984    3364 out.go:358] Setting ErrFile to fd 2...
	I0827 15:20:29.388987    3364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:20:29.389115    3364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:20:29.389236    3364 out.go:352] Setting JSON to false
	I0827 15:20:29.389246    3364 mustload.go:65] Loading cluster: multinode-437000
	I0827 15:20:29.389296    3364 notify.go:220] Checking for updates...
	I0827 15:20:29.389430    3364 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:20:29.389437    3364 status.go:255] checking status of multinode-437000 ...
	I0827 15:20:29.389635    3364 status.go:330] multinode-437000 host status = "Stopped" (err=<nil>)
	I0827 15:20:29.389640    3364 status.go:343] host is not running, skipping remaining checks
	I0827 15:20:29.389642    3364 status.go:257] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr": multinode-437000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr": multinode-437000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (30.483958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.06s)

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-437000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-437000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.188714708s)

-- stdout --
	* [multinode-437000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-437000" primary control-plane node in "multinode-437000" cluster
	* Restarting existing qemu2 VM for "multinode-437000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-437000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:20:29.449439    3368 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:20:29.449585    3368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:20:29.449588    3368 out.go:358] Setting ErrFile to fd 2...
	I0827 15:20:29.449591    3368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:20:29.449734    3368 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:20:29.450827    3368 out.go:352] Setting JSON to false
	I0827 15:20:29.467079    3368 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2994,"bootTime":1724794235,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:20:29.467154    3368 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:20:29.472238    3368 out.go:177] * [multinode-437000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:20:29.479056    3368 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:20:29.479111    3368 notify.go:220] Checking for updates...
	I0827 15:20:29.487189    3368 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:20:29.491164    3368 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:20:29.495193    3368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:20:29.498246    3368 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:20:29.501170    3368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:20:29.504734    3368 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:20:29.504988    3368 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:20:29.509082    3368 out.go:177] * Using the qemu2 driver based on existing profile
	I0827 15:20:29.516197    3368 start.go:297] selected driver: qemu2
	I0827 15:20:29.516203    3368 start.go:901] validating driver "qemu2" against &{Name:multinode-437000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:20:29.516273    3368 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:20:29.518713    3368 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:20:29.518751    3368 cni.go:84] Creating CNI manager for ""
	I0827 15:20:29.518755    3368 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0827 15:20:29.518806    3368 start.go:340] cluster config:
	{Name:multinode-437000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:20:29.522566    3368 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:20:29.531193    3368 out.go:177] * Starting "multinode-437000" primary control-plane node in "multinode-437000" cluster
	I0827 15:20:29.535162    3368 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:20:29.535178    3368 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:20:29.535189    3368 cache.go:56] Caching tarball of preloaded images
	I0827 15:20:29.535241    3368 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:20:29.535246    3368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:20:29.535316    3368 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/multinode-437000/config.json ...
	I0827 15:20:29.535739    3368 start.go:360] acquireMachinesLock for multinode-437000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:20:29.535768    3368 start.go:364] duration metric: took 23.416µs to acquireMachinesLock for "multinode-437000"
	I0827 15:20:29.535778    3368 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:20:29.535783    3368 fix.go:54] fixHost starting: 
	I0827 15:20:29.535898    3368 fix.go:112] recreateIfNeeded on multinode-437000: state=Stopped err=<nil>
	W0827 15:20:29.535906    3368 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:20:29.544206    3368 out.go:177] * Restarting existing qemu2 VM for "multinode-437000" ...
	I0827 15:20:29.548175    3368 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:20:29.548210    3368 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c7:57:35:d3:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2
	I0827 15:20:29.550172    3368 main.go:141] libmachine: STDOUT: 
	I0827 15:20:29.550194    3368 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:20:29.550222    3368 fix.go:56] duration metric: took 14.439583ms for fixHost
	I0827 15:20:29.550226    3368 start.go:83] releasing machines lock for "multinode-437000", held for 14.4535ms
	W0827 15:20:29.550233    3368 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:20:29.550259    3368 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:20:29.550264    3368 start.go:729] Will try again in 5 seconds ...
	I0827 15:20:34.551012    3368 start.go:360] acquireMachinesLock for multinode-437000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:20:34.551359    3368 start.go:364] duration metric: took 272.375µs to acquireMachinesLock for "multinode-437000"
	I0827 15:20:34.551487    3368 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:20:34.551538    3368 fix.go:54] fixHost starting: 
	I0827 15:20:34.552216    3368 fix.go:112] recreateIfNeeded on multinode-437000: state=Stopped err=<nil>
	W0827 15:20:34.552242    3368 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:20:34.556702    3368 out.go:177] * Restarting existing qemu2 VM for "multinode-437000" ...
	I0827 15:20:34.564610    3368 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:20:34.564871    3368 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c7:57:35:d3:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/multinode-437000/disk.qcow2
	I0827 15:20:34.573641    3368 main.go:141] libmachine: STDOUT: 
	I0827 15:20:34.573705    3368 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:20:34.573766    3368 fix.go:56] duration metric: took 22.259375ms for fixHost
	I0827 15:20:34.573783    3368 start.go:83] releasing machines lock for "multinode-437000", held for 22.398584ms
	W0827 15:20:34.573982    3368 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-437000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-437000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:20:34.581438    3368 out.go:201] 
	W0827 15:20:34.585621    3368 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:20:34.585639    3368 out.go:270] * 
	* 
	W0827 15:20:34.588180    3368 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:20:34.596604    3368 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-437000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (68.177ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)

TestMultiNode/serial/ValidateNameConflict (20.25s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-437000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-437000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-437000-m01 --driver=qemu2 : exit status 80 (10.022541416s)

-- stdout --
	* [multinode-437000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-437000-m01" primary control-plane node in "multinode-437000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-437000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-437000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-437000-m02 --driver=qemu2 
E0827 15:20:44.814702    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-437000-m02 --driver=qemu2 : exit status 80 (10.000322375s)

-- stdout --
	* [multinode-437000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-437000-m02" primary control-plane node in "multinode-437000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-437000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-437000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-437000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-437000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-437000: exit status 83 (78.538959ms)

-- stdout --
	* The control-plane node multinode-437000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-437000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-437000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (30.63275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.25s)
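
Every failure in this block reduces to one host-side symptom: QEMU is launched through socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet, so each "Creating qemu2 VM" attempt dies with "Connection refused". A minimal triage sketch for the CI host, assuming the /opt/socket_vmnet install paths shown in the logs above:

	# Is anything serving the daemon socket?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If not, run the daemon in the foreground for a quick check
	# (per the socket_vmnet README; vmnet access requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet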

TestPreload (9.96s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-640000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-640000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.819392042s)

-- stdout --
	* [test-preload-640000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-640000" primary control-plane node in "test-preload-640000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-640000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:20:55.071397    3421 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:20:55.071518    3421 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:20:55.071522    3421 out.go:358] Setting ErrFile to fd 2...
	I0827 15:20:55.071530    3421 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:20:55.071666    3421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:20:55.072753    3421 out.go:352] Setting JSON to false
	I0827 15:20:55.088612    3421 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3020,"bootTime":1724794235,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:20:55.088724    3421 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:20:55.094611    3421 out.go:177] * [test-preload-640000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:20:55.102837    3421 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:20:55.102880    3421 notify.go:220] Checking for updates...
	I0827 15:20:55.111734    3421 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:20:55.115814    3421 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:20:55.118788    3421 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:20:55.121766    3421 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:20:55.124840    3421 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:20:55.128023    3421 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:20:55.128081    3421 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:20:55.131712    3421 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:20:55.138773    3421 start.go:297] selected driver: qemu2
	I0827 15:20:55.138780    3421 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:20:55.138787    3421 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:20:55.141072    3421 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:20:55.143780    3421 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:20:55.147866    3421 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:20:55.147892    3421 cni.go:84] Creating CNI manager for ""
	I0827 15:20:55.147907    3421 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:20:55.147914    3421 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 15:20:55.147959    3421 start.go:340] cluster config:
	{Name:test-preload-640000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-640000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:20:55.151692    3421 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:20:55.159821    3421 out.go:177] * Starting "test-preload-640000" primary control-plane node in "test-preload-640000" cluster
	I0827 15:20:55.163772    3421 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0827 15:20:55.163862    3421 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/test-preload-640000/config.json ...
	I0827 15:20:55.163892    3421 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/test-preload-640000/config.json: {Name:mkdb032ee38d66abd40ade53344ce1a91d5f2585 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:20:55.163889    3421 cache.go:107] acquiring lock: {Name:mk7af5ae5cf7ecca7233f020552354182cef7918 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:20:55.163895    3421 cache.go:107] acquiring lock: {Name:mk9cd0c7c19eac74df4c15e003408f4bc87e06ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:20:55.163928    3421 cache.go:107] acquiring lock: {Name:mk1ca69a6bd7e68d6ce5563d00f63e8575e4ab77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:20:55.164060    3421 cache.go:107] acquiring lock: {Name:mkb13a5674ae584634b86486bcce5a79b960f96b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:20:55.164072    3421 cache.go:107] acquiring lock: {Name:mk1cf371fbc2d86267022f523ebf0d684fa64345 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:20:55.164097    3421 cache.go:107] acquiring lock: {Name:mk878f4a98358d45dd28890bf303c805c02c52b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:20:55.164053    3421 cache.go:107] acquiring lock: {Name:mk73c12a42b6da6931c401dc683c4805e4a51cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:20:55.164147    3421 cache.go:107] acquiring lock: {Name:mk17d2b1a013ced3e74022ed56909816f87bfb8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:20:55.164169    3421 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0827 15:20:55.164248    3421 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0827 15:20:55.164259    3421 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:20:55.164359    3421 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0827 15:20:55.164369    3421 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0827 15:20:55.164386    3421 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0827 15:20:55.164420    3421 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0827 15:20:55.164326    3421 start.go:360] acquireMachinesLock for test-preload-640000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:20:55.164547    3421 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:20:55.164573    3421 start.go:364] duration metric: took 116.625µs to acquireMachinesLock for "test-preload-640000"
	I0827 15:20:55.164585    3421 start.go:93] Provisioning new machine with config: &{Name:test-preload-640000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-640000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:20:55.164623    3421 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:20:55.172740    3421 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:20:55.177740    3421 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:20:55.177799    3421 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0827 15:20:55.177879    3421 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0827 15:20:55.177998    3421 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0827 15:20:55.178412    3421 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0827 15:20:55.179378    3421 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:20:55.179729    3421 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0827 15:20:55.180081    3421 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0827 15:20:55.191826    3421 start.go:159] libmachine.API.Create for "test-preload-640000" (driver="qemu2")
	I0827 15:20:55.191854    3421 client.go:168] LocalClient.Create starting
	I0827 15:20:55.191928    3421 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:20:55.191959    3421 main.go:141] libmachine: Decoding PEM data...
	I0827 15:20:55.191969    3421 main.go:141] libmachine: Parsing certificate...
	I0827 15:20:55.192017    3421 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:20:55.192048    3421 main.go:141] libmachine: Decoding PEM data...
	I0827 15:20:55.192059    3421 main.go:141] libmachine: Parsing certificate...
	I0827 15:20:55.192431    3421 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:20:55.350402    3421 main.go:141] libmachine: Creating SSH key...
	I0827 15:20:55.406281    3421 main.go:141] libmachine: Creating Disk image...
	I0827 15:20:55.406306    3421 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:20:55.406601    3421 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/disk.qcow2
	I0827 15:20:55.416986    3421 main.go:141] libmachine: STDOUT: 
	I0827 15:20:55.417009    3421 main.go:141] libmachine: STDERR: 
	I0827 15:20:55.417064    3421 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/disk.qcow2 +20000M
	I0827 15:20:55.426303    3421 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:20:55.426322    3421 main.go:141] libmachine: STDERR: 
	I0827 15:20:55.426333    3421 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/disk.qcow2
	I0827 15:20:55.426337    3421 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:20:55.426352    3421 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:20:55.426381    3421 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:ec:50:73:c4:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/disk.qcow2
	I0827 15:20:55.428443    3421 main.go:141] libmachine: STDOUT: 
	I0827 15:20:55.428461    3421 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:20:55.428477    3421 client.go:171] duration metric: took 236.621208ms to LocalClient.Create
	W0827 15:20:56.006192    3421 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0827 15:20:56.006337    3421 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0827 15:20:56.208893    3421 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0827 15:20:56.218074    3421 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0827 15:20:56.219858    3421 cache.go:157] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0827 15:20:56.219888    3421 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.056023s
	I0827 15:20:56.219916    3421 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	W0827 15:20:56.248211    3421 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0827 15:20:56.248310    3421 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0827 15:20:56.259162    3421 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0827 15:20:56.381824    3421 cache.go:157] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0827 15:20:56.381867    3421 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.217854375s
	I0827 15:20:56.381894    3421 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0827 15:20:56.412556    3421 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0827 15:20:56.416967    3421 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0827 15:20:56.421620    3421 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0827 15:20:57.428848    3421 start.go:128] duration metric: took 2.264220375s to createHost
	I0827 15:20:57.428902    3421 start.go:83] releasing machines lock for "test-preload-640000", held for 2.264347167s
	W0827 15:20:57.428964    3421 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:20:57.445183    3421 out.go:177] * Deleting "test-preload-640000" in qemu2 ...
	W0827 15:20:57.477970    3421 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:20:57.477998    3421 start.go:729] Will try again in 5 seconds ...
	I0827 15:20:58.658640    3421 cache.go:157] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0827 15:20:58.658697    3421 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.494724292s
	I0827 15:20:58.658742    3421 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0827 15:20:59.602464    3421 cache.go:157] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0827 15:20:59.602532    3421 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.438486375s
	I0827 15:20:59.602558    3421 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0827 15:20:59.643716    3421 cache.go:157] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0827 15:20:59.643752    3421 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.479925s
	I0827 15:20:59.643773    3421 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0827 15:21:00.472107    3421 cache.go:157] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0827 15:21:00.472161    3421 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.308119s
	I0827 15:21:00.472185    3421 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0827 15:21:01.254685    3421 cache.go:157] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0827 15:21:01.254734    3421 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.090903s
	I0827 15:21:01.254759    3421 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0827 15:21:02.478256    3421 start.go:360] acquireMachinesLock for test-preload-640000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:21:02.478744    3421 start.go:364] duration metric: took 404.75µs to acquireMachinesLock for "test-preload-640000"
	I0827 15:21:02.478880    3421 start.go:93] Provisioning new machine with config: &{Name:test-preload-640000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-640000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:21:02.479090    3421 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:21:02.489734    3421 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:21:02.538750    3421 start.go:159] libmachine.API.Create for "test-preload-640000" (driver="qemu2")
	I0827 15:21:02.538818    3421 client.go:168] LocalClient.Create starting
	I0827 15:21:02.538955    3421 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:21:02.539040    3421 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:02.539057    3421 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:02.539125    3421 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:21:02.539170    3421 main.go:141] libmachine: Decoding PEM data...
	I0827 15:21:02.539185    3421 main.go:141] libmachine: Parsing certificate...
	I0827 15:21:02.539710    3421 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:21:02.699527    3421 main.go:141] libmachine: Creating SSH key...
	I0827 15:21:02.793697    3421 main.go:141] libmachine: Creating Disk image...
	I0827 15:21:02.793707    3421 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:21:02.793930    3421 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/disk.qcow2
	I0827 15:21:02.803442    3421 main.go:141] libmachine: STDOUT: 
	I0827 15:21:02.803468    3421 main.go:141] libmachine: STDERR: 
	I0827 15:21:02.803516    3421 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/disk.qcow2 +20000M
	I0827 15:21:02.811740    3421 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:21:02.811776    3421 main.go:141] libmachine: STDERR: 
	I0827 15:21:02.811796    3421 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/disk.qcow2
	I0827 15:21:02.811808    3421 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:21:02.811814    3421 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:21:02.811861    3421 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:8c:d0:a4:89:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/test-preload-640000/disk.qcow2
	I0827 15:21:02.813620    3421 main.go:141] libmachine: STDOUT: 
	I0827 15:21:02.813635    3421 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:21:02.813651    3421 client.go:171] duration metric: took 274.825959ms to LocalClient.Create
	I0827 15:21:04.815884    3421 start.go:128] duration metric: took 2.33676825s to createHost
	I0827 15:21:04.815941    3421 start.go:83] releasing machines lock for "test-preload-640000", held for 2.337200416s
	W0827 15:21:04.816181    3421 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-640000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-640000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:21:04.832664    3421 out.go:201] 
	W0827 15:21:04.835826    3421 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:21:04.835863    3421 out.go:270] * 
	* 
	W0827 15:21:04.838383    3421 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:21:04.847629    3421 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-640000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-27 15:21:04.866323 -0700 PDT m=+2673.469071710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-640000 -n test-preload-640000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-640000 -n test-preload-640000: exit status 7 (64.643041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-640000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-640000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-640000
--- FAIL: TestPreload (9.96s)
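
One detail worth noting in the TestPreload log: although both VM creation attempts failed, the parallel image downloads all completed, so the preload cache this test exercises was populated anyway. A quick way to confirm that, reusing the cache paths printed in the stderr above:

	MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	ls "$MINIKUBE_HOME"/cache/images/arm64/registry.k8s.io
	ls "$MINIKUBE_HOME"/cache/images/arm64/gcr.io/k8s-minikube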

TestScheduledStopUnix (10.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-585000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-585000 --memory=2048 --driver=qemu2 : exit status 80 (9.872551542s)

-- stdout --
	* [scheduled-stop-585000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-585000" primary control-plane node in "scheduled-stop-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-585000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-585000" primary control-plane node in "scheduled-stop-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-27 15:21:14.806535 -0700 PDT m=+2683.485570585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-585000 -n scheduled-stop-585000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-585000 -n scheduled-stop-585000: exit status 7 (68.325833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-585000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-585000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-585000
--- FAIL: TestScheduledStopUnix (10.02s)
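
The failing call can be reproduced without minikube at all: socket_vmnet_client connects to the daemon socket and hands it to the wrapped command as fd 3 (hence the "-netdev socket,id=net0,fd=3" in the logged qemu command lines), so wrapping a trivial command is enough to probe the socket:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	# Expected on this host, matching the test output:
	# Failed to connect to "/var/run/socket_vmnet": Connection refused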

TestSkaffold (12.39s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2838806762 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2838806762 version: (1.05490975s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-434000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-434000 --memory=2600 --driver=qemu2 : exit status 80 (10.014710417s)

-- stdout --
	* [skaffold-434000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-434000" primary control-plane node in "skaffold-434000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-434000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-434000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-434000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-434000" primary control-plane node in "skaffold-434000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-434000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-434000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-27 15:21:27.204911 -0700 PDT m=+2695.884350835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-434000 -n skaffold-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-434000 -n skaffold-434000: exit status 7 (62.416125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-434000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-434000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-434000
--- FAIL: TestSkaffold (12.39s)

TestRunningBinaryUpgrade (599.08s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4117167190 start -p running-upgrade-301000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4117167190 start -p running-upgrade-301000 --memory=2200 --vm-driver=qemu2 : (52.999680042s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-301000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0827 15:23:47.823677    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 15:24:24.519152    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-301000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m32.339818916s)

-- stdout --
	* [running-upgrade-301000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-301000" primary control-plane node in "running-upgrade-301000" cluster
	* Updating the running qemu2 "running-upgrade-301000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0827 15:23:03.292637    3801 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:23:03.292787    3801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:23:03.292795    3801 out.go:358] Setting ErrFile to fd 2...
	I0827 15:23:03.292797    3801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:23:03.292937    3801 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:23:03.294078    3801 out.go:352] Setting JSON to false
	I0827 15:23:03.310781    3801 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3148,"bootTime":1724794235,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:23:03.310862    3801 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:23:03.318788    3801 out.go:177] * [running-upgrade-301000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:23:03.326929    3801 notify.go:220] Checking for updates...
	I0827 15:23:03.330780    3801 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:23:03.342367    3801 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:23:03.346846    3801 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:23:03.349726    3801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:23:03.352786    3801 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:23:03.355872    3801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:23:03.358988    3801 config.go:182] Loaded profile config "running-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:23:03.362724    3801 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0827 15:23:03.365823    3801 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:23:03.369729    3801 out.go:177] * Using the qemu2 driver based on existing profile
	I0827 15:23:03.376789    3801 start.go:297] selected driver: qemu2
	I0827 15:23:03.376796    3801 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50266 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0827 15:23:03.376861    3801 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:23:03.379064    3801 cni.go:84] Creating CNI manager for ""
	I0827 15:23:03.379080    3801 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:23:03.379132    3801 start.go:340] cluster config:
	{Name:running-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50266 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0827 15:23:03.379192    3801 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:23:03.386790    3801 out.go:177] * Starting "running-upgrade-301000" primary control-plane node in "running-upgrade-301000" cluster
	I0827 15:23:03.390672    3801 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0827 15:23:03.390688    3801 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0827 15:23:03.390696    3801 cache.go:56] Caching tarball of preloaded images
	I0827 15:23:03.390756    3801 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:23:03.390761    3801 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0827 15:23:03.390811    3801 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/config.json ...
	I0827 15:23:03.391228    3801 start.go:360] acquireMachinesLock for running-upgrade-301000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:23:03.391262    3801 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "running-upgrade-301000"
	I0827 15:23:03.391270    3801 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:23:03.391276    3801 fix.go:54] fixHost starting: 
	I0827 15:23:03.391990    3801 fix.go:112] recreateIfNeeded on running-upgrade-301000: state=Running err=<nil>
	W0827 15:23:03.391999    3801 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:23:03.396814    3801 out.go:177] * Updating the running qemu2 "running-upgrade-301000" VM ...
	I0827 15:23:03.403734    3801 machine.go:93] provisionDockerMachine start ...
	I0827 15:23:03.403773    3801 main.go:141] libmachine: Using SSH client type: native
	I0827 15:23:03.403874    3801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011fc5a0] 0x1011fee00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0827 15:23:03.403879    3801 main.go:141] libmachine: About to run SSH command:
	hostname
	I0827 15:23:03.473390    3801 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-301000
	
	I0827 15:23:03.473406    3801 buildroot.go:166] provisioning hostname "running-upgrade-301000"
	I0827 15:23:03.473453    3801 main.go:141] libmachine: Using SSH client type: native
	I0827 15:23:03.473574    3801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011fc5a0] 0x1011fee00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0827 15:23:03.473582    3801 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-301000 && echo "running-upgrade-301000" | sudo tee /etc/hostname
	I0827 15:23:03.526037    3801 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-301000
	
	I0827 15:23:03.526083    3801 main.go:141] libmachine: Using SSH client type: native
	I0827 15:23:03.526190    3801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011fc5a0] 0x1011fee00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0827 15:23:03.526200    3801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-301000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-301000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-301000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 15:23:03.576842    3801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
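
The SSH script above is minikube's idempotent /etc/hosts fixup: grep -xq matches whole lines only, so an existing 127.0.1.1 entry is rewritten in place and one is appended otherwise, while a rerun on an already-correct file does nothing. The empty command output logged at 15:23:03.576842 suggests the hostname entry was already present, so neither branch ran. A quick confirmation from inside the guest (the comment shows the expected shape of the output, not a captured run):

	grep -w running-upgrade-301000 /etc/hosts
	# 127.0.1.1 running-upgrade-301000
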
	I0827 15:23:03.576854    3801 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19522-983/.minikube CaCertPath:/Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19522-983/.minikube}
	I0827 15:23:03.576868    3801 buildroot.go:174] setting up certificates
	I0827 15:23:03.576873    3801 provision.go:84] configureAuth start
	I0827 15:23:03.576879    3801 provision.go:143] copyHostCerts
	I0827 15:23:03.576966    3801 exec_runner.go:144] found /Users/jenkins/minikube-integration/19522-983/.minikube/ca.pem, removing ...
	I0827 15:23:03.576971    3801 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19522-983/.minikube/ca.pem
	I0827 15:23:03.577091    3801 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19522-983/.minikube/ca.pem (1078 bytes)
	I0827 15:23:03.577260    3801 exec_runner.go:144] found /Users/jenkins/minikube-integration/19522-983/.minikube/cert.pem, removing ...
	I0827 15:23:03.577265    3801 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19522-983/.minikube/cert.pem
	I0827 15:23:03.577306    3801 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19522-983/.minikube/cert.pem (1123 bytes)
	I0827 15:23:03.577405    3801 exec_runner.go:144] found /Users/jenkins/minikube-integration/19522-983/.minikube/key.pem, removing ...
	I0827 15:23:03.577408    3801 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19522-983/.minikube/key.pem
	I0827 15:23:03.577449    3801 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19522-983/.minikube/key.pem (1675 bytes)
	I0827 15:23:03.577537    3801 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19522-983/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-301000 san=[127.0.0.1 localhost minikube running-upgrade-301000]
	I0827 15:23:03.714635    3801 provision.go:177] copyRemoteCerts
	I0827 15:23:03.714676    3801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 15:23:03.714686    3801 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/running-upgrade-301000/id_rsa Username:docker}
	I0827 15:23:03.742333    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0827 15:23:03.749653    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0827 15:23:03.756427    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0827 15:23:03.763114    3801 provision.go:87] duration metric: took 186.243875ms to configureAuth
	I0827 15:23:03.763123    3801 buildroot.go:189] setting minikube options for container-runtime
	I0827 15:23:03.763235    3801 config.go:182] Loaded profile config "running-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:23:03.763267    3801 main.go:141] libmachine: Using SSH client type: native
	I0827 15:23:03.763352    3801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011fc5a0] 0x1011fee00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0827 15:23:03.763356    3801 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0827 15:23:03.818158    3801 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0827 15:23:03.818168    3801 buildroot.go:70] root file system type: tmpfs
	I0827 15:23:03.818226    3801 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0827 15:23:03.818283    3801 main.go:141] libmachine: Using SSH client type: native
	I0827 15:23:03.818404    3801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011fc5a0] 0x1011fee00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0827 15:23:03.818440    3801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0827 15:23:03.876507    3801 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0827 15:23:03.876564    3801 main.go:141] libmachine: Using SSH client type: native
	I0827 15:23:03.876686    3801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011fc5a0] 0x1011fee00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0827 15:23:03.876694    3801 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0827 15:23:03.938670    3801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
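
The diff-guarded command above is a restart-only-on-change update: the candidate unit was written to docker.service.new, and only when diff -u exits non-zero (files differ) does the || branch install it and reload/restart docker. The empty output here means the unit on disk already matched, so docker was not restarted at this step. The same pattern, generalized (variable names illustrative):

	NEW=/lib/systemd/system/docker.service.new   # candidate unit rendered above
	CUR=/lib/systemd/system/docker.service
	sudo diff -u "$CUR" "$NEW" || {
	    sudo mv "$NEW" "$CUR"
	    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	}
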
	I0827 15:23:03.938682    3801 machine.go:96] duration metric: took 534.9595ms to provisionDockerMachine
	I0827 15:23:03.938693    3801 start.go:293] postStartSetup for "running-upgrade-301000" (driver="qemu2")
	I0827 15:23:03.938699    3801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 15:23:03.938753    3801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 15:23:03.938761    3801 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/running-upgrade-301000/id_rsa Username:docker}
	I0827 15:23:03.966364    3801 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 15:23:03.967778    3801 info.go:137] Remote host: Buildroot 2021.02.12
	I0827 15:23:03.967785    3801 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19522-983/.minikube/addons for local assets ...
	I0827 15:23:03.967842    3801 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19522-983/.minikube/files for local assets ...
	I0827 15:23:03.967939    3801 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19522-983/.minikube/files/etc/ssl/certs/14632.pem -> 14632.pem in /etc/ssl/certs
	I0827 15:23:03.968029    3801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 15:23:03.971007    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/files/etc/ssl/certs/14632.pem --> /etc/ssl/certs/14632.pem (1708 bytes)
	I0827 15:23:03.978496    3801 start.go:296] duration metric: took 39.799834ms for postStartSetup
	I0827 15:23:03.978510    3801 fix.go:56] duration metric: took 587.25525ms for fixHost
	I0827 15:23:03.978544    3801 main.go:141] libmachine: Using SSH client type: native
	I0827 15:23:03.978645    3801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011fc5a0] 0x1011fee00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0827 15:23:03.978650    3801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 15:23:04.029147    3801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724797383.751274846
	
	I0827 15:23:04.029157    3801 fix.go:216] guest clock: 1724797383.751274846
	I0827 15:23:04.029161    3801 fix.go:229] Guest: 2024-08-27 15:23:03.751274846 -0700 PDT Remote: 2024-08-27 15:23:03.978511 -0700 PDT m=+0.705371126 (delta=-227.236154ms)
	I0827 15:23:04.029173    3801 fix.go:200] guest clock delta is within tolerance: -227.236154ms
	I0827 15:23:04.029176    3801 start.go:83] releasing machines lock for "running-upgrade-301000", held for 637.9315ms
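
The fix.go lines above are the guest-clock check: minikube runs date +%s.%N over SSH, parses the seconds.nanoseconds value, and compares it against the host clock; the -227ms delta is inside tolerance, so no resync is performed. A coarse shell equivalent, at whole-second resolution since BSD date on the macOS host lacks %N (port and user taken from the log above):

	guest=$(ssh -p 50234 docker@localhost date +%s)
	host=$(date +%s)
	echo "guest-host delta: $((guest - host))s"   # negative means the guest clock is behind
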
	I0827 15:23:04.029241    3801 ssh_runner.go:195] Run: cat /version.json
	I0827 15:23:04.029251    3801 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/running-upgrade-301000/id_rsa Username:docker}
	I0827 15:23:04.029265    3801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 15:23:04.029279    3801 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/running-upgrade-301000/id_rsa Username:docker}
	W0827 15:23:04.029869    3801 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50234: connect: connection refused
	I0827 15:23:04.029889    3801 retry.go:31] will retry after 172.998775ms: dial tcp [::1]:50234: connect: connection refused
	W0827 15:23:04.057263    3801 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0827 15:23:04.057310    3801 ssh_runner.go:195] Run: systemctl --version
	I0827 15:23:04.059233    3801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 15:23:04.060920    3801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 15:23:04.060947    3801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0827 15:23:04.063602    3801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0827 15:23:04.067932    3801 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
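
The two find ... -exec sed runs above normalize any bridge/podman CNI configs under /etc/cni/net.d: IPv6 dst/subnet entries are dropped and the pod subnet is rewritten to minikube's 10.244.0.0/16 (here the match was 87-podman-bridge.conflist). The core substitution, applied to that file on its own:

	sudo sed -i -r 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
	    /etc/cni/net.d/87-podman-bridge.conflist
	grep '"subnet"' /etc/cni/net.d/87-podman-bridge.conflist   # now 10.244.0.0/16
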
	I0827 15:23:04.067941    3801 start.go:495] detecting cgroup driver to use...
	I0827 15:23:04.068002    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 15:23:04.073843    3801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0827 15:23:04.076870    3801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0827 15:23:04.079575    3801 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0827 15:23:04.079595    3801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0827 15:23:04.082890    3801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0827 15:23:04.085991    3801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0827 15:23:04.089083    3801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0827 15:23:04.092072    3801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 15:23:04.095341    3801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0827 15:23:04.098805    3801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0827 15:23:04.102131    3801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0827 15:23:04.105043    3801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 15:23:04.107559    3801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 15:23:04.110752    3801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:23:04.193047    3801 ssh_runner.go:195] Run: sudo systemctl restart containerd
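
The sed batch above edits /etc/containerd/config.toml in place: it pins sandbox_image to pause:3.7, disables restrict_oom_score_adj, forces SystemdCgroup = false (the cgroupfs driver, matching the docker daemon.json written shortly after), migrates runtime classes to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d, followed by a daemon-reload and containerd restart. A post-check of the keys those commands touch (expected values as comments):

	grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	#   sandbox_image = "registry.k8s.io/pause:3.7"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
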
	I0827 15:23:04.199081    3801 start.go:495] detecting cgroup driver to use...
	I0827 15:23:04.199146    3801 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0827 15:23:04.206593    3801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 15:23:04.212429    3801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 15:23:04.222625    3801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 15:23:04.228376    3801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0827 15:23:04.233026    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 15:23:04.238121    3801 ssh_runner.go:195] Run: which cri-dockerd
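
/etc/crictl.yaml, first pointed at containerd during driver detection, is rewritten just above to the cri-dockerd socket, since this profile keeps the docker runtime; crictl then reaches docker through cri-dockerd. For instance (expected output shape shown as comments, not a captured run):

	cat /etc/crictl.yaml    # runtime-endpoint: unix:///var/run/cri-dockerd.sock
	sudo crictl version     # should report RuntimeName: docker
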
	I0827 15:23:04.240350    3801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0827 15:23:04.282115    3801 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0827 15:23:04.287224    3801 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0827 15:23:04.377716    3801 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0827 15:23:04.453737    3801 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0827 15:23:04.453795    3801 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0827 15:23:04.459013    3801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:23:04.538499    3801 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0827 15:23:17.236835    3801 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.698738333s)
	I0827 15:23:17.236910    3801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0827 15:23:17.241778    3801 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0827 15:23:17.250834    3801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0827 15:23:17.255948    3801 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0827 15:23:17.337886    3801 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0827 15:23:17.403108    3801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:23:17.461501    3801 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0827 15:23:17.467787    3801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0827 15:23:17.472808    3801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:23:17.537705    3801 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0827 15:23:17.576762    3801 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0827 15:23:17.576838    3801 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0827 15:23:17.579135    3801 start.go:563] Will wait 60s for crictl version
	I0827 15:23:17.579189    3801 ssh_runner.go:195] Run: which crictl
	I0827 15:23:17.580783    3801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 15:23:17.592739    3801 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0827 15:23:17.592818    3801 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0827 15:23:17.605232    3801 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0827 15:23:17.623780    3801 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0827 15:23:17.623917    3801 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0827 15:23:17.625255    3801 kubeadm.go:883] updating cluster {Name:running-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50266 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
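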
	I0827 15:23:17.625302    3801 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0827 15:23:17.625341    3801 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0827 15:23:17.639918    3801 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0827 15:23:17.639929    3801 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0827 15:23:17.639975    3801 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0827 15:23:17.642921    3801 ssh_runner.go:195] Run: which lz4
	I0827 15:23:17.644090    3801 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0827 15:23:17.645209    3801 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0827 15:23:17.645218    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0827 15:23:18.576623    3801 docker.go:649] duration metric: took 932.592666ms to copy over tarball
	I0827 15:23:18.576709    3801 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0827 15:23:19.711728    3801 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.13503825s)
	I0827 15:23:19.711745    3801 ssh_runner.go:146] rm: /preloaded.tar.lz4
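
This is the preload fast path: the guest's image store only holds k8s.gcr.io-tagged copies of the v1.24.1 images while this minikube expects registry.k8s.io names, so the ~360 MB lz4 tarball of a pre-populated /var/lib/docker is scp'd in, unpacked over /var, and deleted. To inspect such a tarball without extracting it (requires the lz4 CLI on the host):

	lz4 -dc preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 | tar -tf - | head
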
	I0827 15:23:19.729052    3801 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0827 15:23:19.732482    3801 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0827 15:23:19.737110    3801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:23:19.798461    3801 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0827 15:23:21.084889    3801 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.28645375s)
	I0827 15:23:21.084972    3801 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0827 15:23:21.096935    3801 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0827 15:23:21.096942    3801 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0827 15:23:21.096947    3801 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0827 15:23:21.100823    3801 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:23:21.102655    3801 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:23:21.104161    3801 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:23:21.104212    3801 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:23:21.105559    3801 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:23:21.105645    3801 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:23:21.106970    3801 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:23:21.107123    3801 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:23:21.108336    3801 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0827 15:23:21.108641    3801 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:23:21.109402    3801 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0827 15:23:21.109560    3801 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:23:21.110452    3801 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0827 15:23:21.110687    3801 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:23:21.111260    3801 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0827 15:23:21.112399    3801 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	W0827 15:23:21.886969    3801 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0827 15:23:21.887669    3801 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:23:21.928904    3801 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0827 15:23:21.928955    3801 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:23:21.929058    3801 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:23:21.956238    3801 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0827 15:23:21.956404    3801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0827 15:23:21.958688    3801 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0827 15:23:21.958707    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0827 15:23:21.995450    3801 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0827 15:23:21.995471    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
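
Images that fail the hash comparison are removed with docker rmi and re-imported from the host cache: the raw image tarball is scp'd into /var/lib/minikube/images and streamed into the daemon, as above. The same import for one image, standalone (the comment shows the success message docker load prints):

	sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load
	# Loaded image: gcr.io/k8s-minikube/storage-provisioner:v5
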
	I0827 15:23:22.108855    3801 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:23:22.122296    3801 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:23:22.154447    3801 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:23:22.210104    3801 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:23:22.248565    3801 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0827 15:23:22.248596    3801 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0827 15:23:22.248609    3801 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0827 15:23:22.248614    3801 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:23:22.248619    3801 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:23:22.248670    3801 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:23:22.248670    3801 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:23:22.248688    3801 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0827 15:23:22.248687    3801 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0827 15:23:22.248695    3801 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:23:22.248707    3801 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:23:22.248709    3801 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:23:22.248732    3801 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:23:22.250803    3801 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0827 15:23:22.252217    3801 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0827 15:23:22.252301    3801 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:23:22.262874    3801 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0827 15:23:22.270849    3801 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0827 15:23:22.282454    3801 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0827 15:23:22.282510    3801 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0827 15:23:22.282513    3801 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0827 15:23:22.286786    3801 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0827 15:23:22.286804    3801 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0827 15:23:22.286854    3801 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0827 15:23:22.288353    3801 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0827 15:23:22.288367    3801 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:23:22.288395    3801 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:23:22.313674    3801 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0827 15:23:22.313699    3801 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0827 15:23:22.313704    3801 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0827 15:23:22.313705    3801 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0827 15:23:22.313747    3801 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0827 15:23:22.313816    3801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0827 15:23:22.323897    3801 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0827 15:23:22.323928    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0827 15:23:22.323943    3801 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0827 15:23:22.324043    3801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0827 15:23:22.325789    3801 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0827 15:23:22.325807    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0827 15:23:22.347950    3801 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0827 15:23:22.347964    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0827 15:23:22.398287    3801 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0827 15:23:22.398308    3801 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0827 15:23:22.398317    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0827 15:23:22.434559    3801 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0827 15:23:22.434595    3801 cache_images.go:92] duration metric: took 1.337686542s to LoadCachedImages
	W0827 15:23:22.434632    3801 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0827 15:23:22.434638    3801 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0827 15:23:22.434699    3801 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-301000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
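
As with docker.service earlier, the kubelet drop-in clears the inherited command with an empty ExecStart= before setting the real one, pinning kubelet to the cri-dockerd socket, this node's IP, and the v1.24.1 binary. Once the scp calls below have written the unit files, the effective command line can be inspected with:

	systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart
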
	I0827 15:23:22.434760    3801 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0827 15:23:22.453805    3801 cni.go:84] Creating CNI manager for ""
	I0827 15:23:22.453818    3801 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:23:22.453822    3801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 15:23:22.453830    3801 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-301000 NodeName:running-upgrade-301000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 15:23:22.453895    3801 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-301000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
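
The rendered config above chains four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); note that evictionHard at 0% and imageGCHighThresholdPercent: 100 deliberately disable disk-pressure eviction for these small test VMs, per the inline comment. It is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; minikube drives kubeadm itself in later steps not shown here, but a hand-run equivalent against the installed file would be:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml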
	
	I0827 15:23:22.453956    3801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0827 15:23:22.457561    3801 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 15:23:22.457595    3801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0827 15:23:22.460113    3801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0827 15:23:22.465145    3801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 15:23:22.469822    3801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0827 15:23:22.475238    3801 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0827 15:23:22.476561    3801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:23:22.538480    3801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 15:23:22.543754    3801 certs.go:68] Setting up /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000 for IP: 10.0.2.15
	I0827 15:23:22.543761    3801 certs.go:194] generating shared ca certs ...
	I0827 15:23:22.543769    3801 certs.go:226] acquiring lock for ca certs: {Name:mkc3f4287026c100ff774c65b8333a833cfe8f85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:23:22.543915    3801 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19522-983/.minikube/ca.key
	I0827 15:23:22.543949    3801 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19522-983/.minikube/proxy-client-ca.key
	I0827 15:23:22.543954    3801 certs.go:256] generating profile certs ...
	I0827 15:23:22.544015    3801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/client.key
	I0827 15:23:22.544034    3801 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/apiserver.key.a19d6508
	I0827 15:23:22.544048    3801 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/apiserver.crt.a19d6508 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0827 15:23:22.630419    3801 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/apiserver.crt.a19d6508 ...
	I0827 15:23:22.630427    3801 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/apiserver.crt.a19d6508: {Name:mk1db52b2db99a10dae5761380bffff1039aeccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:23:22.630885    3801 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/apiserver.key.a19d6508 ...
	I0827 15:23:22.630892    3801 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/apiserver.key.a19d6508: {Name:mk27891456e07baf91bdf906cbc4d85d8c564044 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:23:22.631053    3801 certs.go:381] copying /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/apiserver.crt.a19d6508 -> /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/apiserver.crt
	I0827 15:23:22.631702    3801 certs.go:385] copying /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/apiserver.key.a19d6508 -> /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/apiserver.key
	I0827 15:23:22.631839    3801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/proxy-client.key
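
The apiserver serving cert is the one piece regenerated here, because its SAN set must cover every address clients may dial: 10.96.0.1 (the first IP of ServiceCIDR 10.96.0.0/12, i.e. the in-cluster kubernetes service), 127.0.0.1, 10.0.0.1, and the node IP 10.0.2.15. The SANs on the produced cert can be verified with:

	openssl x509 -noout -text \
	    -in /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
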
	I0827 15:23:22.631966    3801 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/1463.pem (1338 bytes)
	W0827 15:23:22.631993    3801 certs.go:480] ignoring /Users/jenkins/minikube-integration/19522-983/.minikube/certs/1463_empty.pem, impossibly tiny 0 bytes
	I0827 15:23:22.631998    3801 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca-key.pem (1679 bytes)
	I0827 15:23:22.632018    3801 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem (1078 bytes)
	I0827 15:23:22.632036    3801 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem (1123 bytes)
	I0827 15:23:22.632056    3801 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/key.pem (1675 bytes)
	I0827 15:23:22.632094    3801 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/files/etc/ssl/certs/14632.pem (1708 bytes)
	I0827 15:23:22.632403    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 15:23:22.640048    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0827 15:23:22.647489    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 15:23:22.654902    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0827 15:23:22.662144    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0827 15:23:22.668677    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 15:23:22.675213    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 15:23:22.682675    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0827 15:23:22.689865    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/certs/1463.pem --> /usr/share/ca-certificates/1463.pem (1338 bytes)
	I0827 15:23:22.696563    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/files/etc/ssl/certs/14632.pem --> /usr/share/ca-certificates/14632.pem (1708 bytes)
	I0827 15:23:22.703152    3801 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 15:23:22.710373    3801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 15:23:22.715785    3801 ssh_runner.go:195] Run: openssl version
	I0827 15:23:22.717755    3801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463.pem && ln -fs /usr/share/ca-certificates/1463.pem /etc/ssl/certs/1463.pem"
	I0827 15:23:22.720794    3801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463.pem
	I0827 15:23:22.722325    3801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 21:43 /usr/share/ca-certificates/1463.pem
	I0827 15:23:22.722350    3801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463.pem
	I0827 15:23:22.724263    3801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1463.pem /etc/ssl/certs/51391683.0"
	I0827 15:23:22.726927    3801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14632.pem && ln -fs /usr/share/ca-certificates/14632.pem /etc/ssl/certs/14632.pem"
	I0827 15:23:22.730350    3801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14632.pem
	I0827 15:23:22.732157    3801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 21:43 /usr/share/ca-certificates/14632.pem
	I0827 15:23:22.732178    3801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14632.pem
	I0827 15:23:22.734119    3801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14632.pem /etc/ssl/certs/3ec20f2e.0"
	I0827 15:23:22.737164    3801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 15:23:22.739990    3801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 15:23:22.741423    3801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0827 15:23:22.741444    3801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 15:23:22.743246    3801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 15:23:22.746351    3801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 15:23:22.748031    3801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0827 15:23:22.749927    3801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0827 15:23:22.752017    3801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0827 15:23:22.753836    3801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0827 15:23:22.755754    3801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0827 15:23:22.757765    3801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
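
The six `openssl x509 -checkend 86400` runs above confirm that each control-plane certificate remains valid for at least another 24 hours before the cluster restart is attempted. A minimal Go sketch of the same check (hypothetical helper, not minikube's actual code; certcheck.go is an assumed name):

// certcheck.go - a minimal sketch (hypothetical, not minikube's own code) of
// the check `openssl x509 -checkend 86400` performs above: fail if the
// certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// e.g. go run certcheck.go /var/lib/minikube/certs/etcd/peer.crt
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend N asks: will the cert still be valid N seconds from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
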
	I0827 15:23:22.759757    3801 kubeadm.go:392] StartCluster: {Name:running-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50266 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0827 15:23:22.759827    3801 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0827 15:23:22.770346    3801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0827 15:23:22.774050    3801 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0827 15:23:22.774055    3801 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0827 15:23:22.774081    3801 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0827 15:23:22.777411    3801 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0827 15:23:22.777672    3801 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-301000" does not appear in /Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:23:22.777724    3801 kubeconfig.go:62] /Users/jenkins/minikube-integration/19522-983/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-301000" cluster setting kubeconfig missing "running-upgrade-301000" context setting]
	I0827 15:23:22.777878    3801 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/kubeconfig: {Name:mk76bdfc088f48bbbf806c94a3244a333f8aeabd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:23:22.778537    3801 kapi.go:59] client config for running-upgrade-301000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/client.key", CAFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1027b7eb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
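
The client config printed above maps onto a client-go rest.Config: the host plus the profile's client cert/key and the cluster CA. A minimal sketch, assuming the k8s.io/client-go module is available in go.mod (kapiclient.go is a hypothetical name):

// kapiclient.go - a minimal sketch, assuming client-go, of the rest.Config
// the kapi.go:59 line above prints: host plus the profile's client cert/key
// and the cluster CA. Paths are the ones from the log.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	profile := "/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000"
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: profile + "/client.crt",
			KeyFile:  profile + "/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", clientset)
}
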
	I0827 15:23:22.778891    3801 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0827 15:23:22.781955    3801 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-301000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
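
The diff above is why kubeadm.go:640 reports drift: the regenerated config moves criSocket to a unix:// URI and cgroupDriver from systemd to cgroupfs, so the cluster must be reconfigured from the new file. A sketch of the drift test, under the assumption that it simply interprets diff's exit status (driftcheck.go is a hypothetical name):

// driftcheck.go - a sketch of the drift test behind kubeadm.go:640 above:
// run `diff -u old new`; exit status 0 means identical, 1 means the files
// differ (config drift), anything else is a real error. The two paths come
// from the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output()
	if err == nil {
		fmt.Println("no drift: reuse existing kubeadm.yaml")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Printf("drift detected, will reconfigure:\n%s", out)
		return
	}
	fmt.Println("diff failed:", err)
}
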
	I0827 15:23:22.781961    3801 kubeadm.go:1160] stopping kube-system containers ...
	I0827 15:23:22.782000    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0827 15:23:22.792502    3801 docker.go:483] Stopping containers: [77dab957ece7 84edec64b72f 615d064d1dcb db8bdf21a995 e1ffb58c1505 1590b19cad8c 584fc61c87fb 04b0058ea0e2 8755897fc0dd edaf8a4f80d5 2a94348d85f5 c05fb7472dfb f55afce344f0]
	I0827 15:23:22.792569    3801 ssh_runner.go:195] Run: docker stop 77dab957ece7 84edec64b72f 615d064d1dcb db8bdf21a995 e1ffb58c1505 1590b19cad8c 584fc61c87fb 04b0058ea0e2 8755897fc0dd edaf8a4f80d5 2a94348d85f5 c05fb7472dfb f55afce344f0
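
Before reconfiguring, every kube-system container is located by docker name filter and stopped, exactly as the two docker commands above show. A sketch of that sequence (stopkube.go is a hypothetical name):

// stopkube.go - a sketch of docker.go:483 above: list every container whose
// name matches k8s_.*_(kube-system)_, then stop them all in one command
// before the control plane is reconfigured.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Println("Stopping containers:", ids)
	if len(ids) == 0 {
		return
	}
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		panic(err)
	}
}
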
	I0827 15:23:22.803278    3801 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0827 15:23:22.896866    3801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 15:23:22.900657    3801 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug 27 22:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug 27 22:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 27 22:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 27 22:22 /etc/kubernetes/scheduler.conf
	
	I0827 15:23:22.900700    3801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/admin.conf
	I0827 15:23:22.903784    3801 kubeadm.go:163] "https://control-plane.minikube.internal:50266" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0827 15:23:22.903813    3801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 15:23:22.906796    3801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/kubelet.conf
	I0827 15:23:22.910994    3801 kubeadm.go:163] "https://control-plane.minikube.internal:50266" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0827 15:23:22.911029    3801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 15:23:22.914435    3801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/controller-manager.conf
	I0827 15:23:22.917306    3801 kubeadm.go:163] "https://control-plane.minikube.internal:50266" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0827 15:23:22.917330    3801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 15:23:22.919868    3801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/scheduler.conf
	I0827 15:23:22.922750    3801 kubeadm.go:163] "https://control-plane.minikube.internal:50266" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0827 15:23:22.922772    3801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
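
Each of the four kubeconfig files above is grepped for the expected control-plane endpoint; a nonzero grep exit means the file points elsewhere, so it is deleted and regenerated by the `kubeadm init phase kubeconfig` step below. A sketch of that loop (kubeconfigcheck.go is a hypothetical name; needs root to read /etc/kubernetes):

// kubeconfigcheck.go - a sketch of the loop at kubeadm.go:163 above: if a
// kubeconfig file does not reference the expected control-plane endpoint,
// remove it so `kubeadm init phase kubeconfig all` regenerates it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50266"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s does not reference %s - removing\n", f, endpoint)
			os.Remove(f) // regenerated by `kubeadm init phase kubeconfig`
		}
	}
}
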
	I0827 15:23:22.926267    3801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 15:23:22.929589    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 15:23:22.953321    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 15:23:23.376273    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0827 15:23:23.562041    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 15:23:23.587653    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0827 15:23:23.610946    3801 api_server.go:52] waiting for apiserver process to appear ...
	I0827 15:23:23.611028    3801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 15:23:24.113412    3801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 15:23:24.613395    3801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 15:23:25.112021    3801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 15:23:25.116097    3801 api_server.go:72] duration metric: took 1.505202458s to wait for apiserver process to appear ...
	I0827 15:23:25.116108    3801 api_server.go:88] waiting for apiserver healthz status ...
	I0827 15:23:25.116117    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:23:30.118121    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:23:30.118171    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:23:35.118435    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:23:35.118485    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:23:40.118952    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:23:40.119078    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:23:45.120388    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:23:45.120474    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:23:50.121877    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:23:50.121961    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:23:55.122841    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:23:55.122924    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:24:00.125041    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:24:00.125122    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:24:05.127813    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:24:05.127895    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:24:10.130369    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:24:10.130417    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:24:15.131102    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:24:15.131182    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:24:20.133666    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:24:20.133767    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:24:25.136002    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
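
Every healthz probe above dies with a context deadline: nothing answers on 10.0.2.15:8443, so after roughly a minute of retries the loop falls back to gathering component logs. A sketch of the probe, assuming a plain HTTP GET with a 5-second client timeout (healthz.go is a hypothetical name; the real probe verifies the cluster CA rather than skipping TLS verification):

// healthz.go - a sketch (assumed behavior, mirroring api_server.go:253/269
// above) of the probe loop: GET /healthz with a short timeout, retrying
// until it answers "ok" or the caller gives up.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// when the endpoint is unreachable, this timeout alone produces the
		// ~5s spacing between the attempts seen in the log above
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// sketch-only simplification; minikube checks the cluster CA
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 1; attempt <= 13; attempt++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Printf("attempt %d stopped: %v\n", attempt, err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("healthz: %s\n", body)
			return
		}
	}
	fmt.Println("apiserver never became healthy; gathering logs instead")
}
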
	I0827 15:24:25.136437    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:24:25.181319    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:24:25.181440    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:24:25.202517    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:24:25.202651    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:24:25.217171    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:24:25.217250    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:24:25.229063    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:24:25.229133    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:24:25.239928    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:24:25.240003    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:24:25.250615    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:24:25.250680    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:24:25.260884    3801 logs.go:276] 0 containers: []
	W0827 15:24:25.260895    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:24:25.260946    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:24:25.271794    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:24:25.271809    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:24:25.271813    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:24:25.289284    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:24:25.289293    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:24:25.327784    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:24:25.327791    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:24:25.346787    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:24:25.346797    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:24:25.358852    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:24:25.358865    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:24:25.370376    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:24:25.370388    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:24:25.381557    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:24:25.381569    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:24:25.397308    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:24:25.397322    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:24:25.408425    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:24:25.408435    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:24:25.433011    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:24:25.433018    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:24:25.507835    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:24:25.507847    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:24:25.522026    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:24:25.522039    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:24:25.541242    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:24:25.541255    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:24:25.558765    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:24:25.558775    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:24:25.573948    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:24:25.573962    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:24:25.579000    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:24:25.579009    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:24:25.593384    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:24:25.593396    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
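
The gathering rounds that follow all share one shape: find each component's containers by docker name filter, then tail the last 400 lines of each. A sketch of that pattern (gatherlogs.go is a hypothetical name):

// gatherlogs.go - a sketch of the logs.go:276/123 pattern above: locate each
// control-plane component's containers by name filter, then tail their logs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
		if err != nil {
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
		for _, id := range ids {
			// docker writes container logs to both stdout and stderr
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- logs %s ---\n%s", id, logs)
		}
	}
}
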
	I0827 15:24:28.111670    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:24:33.114396    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:24:33.114821    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:24:33.153894    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:24:33.154021    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:24:33.177377    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:24:33.177490    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:24:33.192104    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:24:33.192187    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:24:33.204634    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:24:33.204702    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:24:33.215438    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:24:33.215503    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:24:33.226406    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:24:33.226476    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:24:33.236673    3801 logs.go:276] 0 containers: []
	W0827 15:24:33.236682    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:24:33.236733    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:24:33.247206    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:24:33.247230    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:24:33.247234    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:24:33.259267    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:24:33.259278    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:24:33.276548    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:24:33.276556    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:24:33.288049    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:24:33.288058    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:24:33.299817    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:24:33.299831    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:24:33.312528    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:24:33.312538    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:24:33.334202    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:24:33.334212    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:24:33.348643    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:24:33.348657    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:24:33.360969    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:24:33.360982    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:24:33.374448    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:24:33.374459    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:24:33.385244    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:24:33.385253    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:24:33.404655    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:24:33.404664    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:24:33.429768    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:24:33.429777    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:24:33.433988    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:24:33.433994    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:24:33.468410    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:24:33.468419    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:24:33.479762    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:24:33.479782    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:24:33.517958    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:24:33.517966    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:24:36.038399    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:24:41.041253    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:24:41.041676    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:24:41.082394    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:24:41.082537    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:24:41.103992    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:24:41.104110    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:24:41.121457    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:24:41.121543    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:24:41.133702    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:24:41.133771    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:24:41.143949    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:24:41.144019    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:24:41.158376    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:24:41.158450    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:24:41.168180    3801 logs.go:276] 0 containers: []
	W0827 15:24:41.168192    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:24:41.168259    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:24:41.178845    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:24:41.178862    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:24:41.178867    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:24:41.193889    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:24:41.193899    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:24:41.205352    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:24:41.205363    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:24:41.231392    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:24:41.231405    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:24:41.249679    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:24:41.249689    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:24:41.263845    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:24:41.263857    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:24:41.278206    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:24:41.278218    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:24:41.289266    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:24:41.289278    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:24:41.327058    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:24:41.327068    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:24:41.331855    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:24:41.331863    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:24:41.345897    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:24:41.345906    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:24:41.357821    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:24:41.357833    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:24:41.373062    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:24:41.373074    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:24:41.385517    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:24:41.385534    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:24:41.422243    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:24:41.422252    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:24:41.434213    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:24:41.434227    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:24:41.445562    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:24:41.445575    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:24:43.960418    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:24:48.962618    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:24:48.962808    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:24:48.990789    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:24:48.990905    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:24:49.005459    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:24:49.005533    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:24:49.016774    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:24:49.016839    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:24:49.026779    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:24:49.026848    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:24:49.037091    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:24:49.037156    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:24:49.047359    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:24:49.047425    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:24:49.062405    3801 logs.go:276] 0 containers: []
	W0827 15:24:49.062419    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:24:49.062473    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:24:49.072498    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:24:49.072515    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:24:49.072520    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:24:49.086094    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:24:49.086106    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:24:49.100184    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:24:49.100194    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:24:49.113994    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:24:49.114006    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:24:49.125100    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:24:49.125110    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:24:49.129163    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:24:49.129172    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:24:49.146323    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:24:49.146335    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:24:49.172064    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:24:49.172072    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:24:49.187089    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:24:49.187100    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:24:49.200976    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:24:49.200991    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:24:49.215075    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:24:49.215085    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:24:49.226405    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:24:49.226415    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:24:49.241118    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:24:49.241130    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:24:49.252391    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:24:49.252405    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:24:49.288901    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:24:49.288907    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:24:49.322196    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:24:49.322208    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:24:49.339296    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:24:49.339308    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:24:51.853308    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:24:56.855925    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:24:56.856381    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:24:56.896817    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:24:56.896960    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:24:56.918539    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:24:56.918647    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:24:56.934131    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:24:56.934211    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:24:56.947911    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:24:56.947981    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:24:56.959086    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:24:56.959155    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:24:56.969994    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:24:56.970063    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:24:56.980160    3801 logs.go:276] 0 containers: []
	W0827 15:24:56.980171    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:24:56.980227    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:24:56.990962    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:24:56.990976    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:24:56.990980    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:24:57.029672    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:24:57.029682    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:24:57.047222    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:24:57.047233    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:24:57.062935    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:24:57.062948    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:24:57.075649    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:24:57.075662    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:24:57.100248    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:24:57.100260    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:24:57.114979    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:24:57.114991    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:24:57.129336    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:24:57.129348    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:24:57.140656    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:24:57.140668    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:24:57.151909    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:24:57.151921    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:24:57.156462    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:24:57.156469    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:24:57.192344    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:24:57.192356    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:24:57.204861    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:24:57.204872    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:24:57.218302    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:24:57.218313    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:24:57.239217    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:24:57.239227    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:24:57.253029    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:24:57.253042    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:24:57.268314    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:24:57.268326    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:24:59.781983    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:25:04.784258    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:25:04.784647    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:25:04.829509    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:25:04.829631    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:25:04.848791    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:25:04.848872    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:25:04.863210    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:25:04.863284    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:25:04.875371    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:25:04.875442    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:25:04.886009    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:25:04.886075    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:25:04.896928    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:25:04.897000    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:25:04.907336    3801 logs.go:276] 0 containers: []
	W0827 15:25:04.907347    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:25:04.907399    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:25:04.917861    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:25:04.917875    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:25:04.917879    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:25:04.932889    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:25:04.932903    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:25:04.944448    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:25:04.944462    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:25:04.966296    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:25:04.966310    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:25:04.984657    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:25:04.984667    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:25:04.995952    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:25:04.995962    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:25:05.020433    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:25:05.020441    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:25:05.032061    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:25:05.032071    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:25:05.069026    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:25:05.069035    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:25:05.073205    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:25:05.073214    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:25:05.108420    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:25:05.108433    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:25:05.121551    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:25:05.121564    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:25:05.135246    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:25:05.135258    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:25:05.146197    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:25:05.146208    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:25:05.161419    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:25:05.161431    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:25:05.173029    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:25:05.173041    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:25:05.187606    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:25:05.187618    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:25:07.704334    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:25:12.706993    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:25:12.707407    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:25:12.747569    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:25:12.747698    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:25:12.769198    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:25:12.769304    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:25:12.784701    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:25:12.784772    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:25:12.797461    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:25:12.797530    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:25:12.809843    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:25:12.809904    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:25:12.820733    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:25:12.820802    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:25:12.831160    3801 logs.go:276] 0 containers: []
	W0827 15:25:12.831170    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:25:12.831227    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:25:12.841774    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:25:12.841791    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:25:12.841796    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:25:12.854719    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:25:12.854732    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:25:12.868874    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:25:12.868887    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:25:12.890631    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:25:12.890644    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:25:12.907831    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:25:12.907843    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:25:12.919296    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:25:12.919306    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:25:12.923507    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:25:12.923513    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:25:12.937450    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:25:12.937459    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:25:12.952738    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:25:12.952748    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:25:12.971902    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:25:12.971920    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:25:12.997781    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:25:12.997789    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:25:13.035962    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:25:13.035968    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:25:13.071296    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:25:13.071310    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:25:13.085828    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:25:13.085840    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:25:13.099283    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:25:13.099295    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:25:13.110732    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:25:13.110743    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:25:13.122288    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:25:13.122300    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:25:15.635662    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:25:20.638273    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
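The five-second gap between each "Checking apiserver healthz" line and the "context deadline exceeded" that follows it is the probe's client timeout: api_server.go issues an HTTPS GET against https://10.0.2.15:8443/healthz (10.0.2.15 being the default QEMU user-mode guest address) and gives up after 5s, then falls back to gathering diagnostics before retrying. A minimal shell sketch of an equivalent loop; curl here is a stand-in assumption for minikube's internal Go HTTP client, which skips TLS verification and treats an "ok" body as healthy:

    # Hedged approximation of the healthz retry loop visible in this log.
    # curl flags stand in for a Go http.Client with a 5s timeout and TLS
    # verification disabled; a healthy apiserver answers /healthz with "ok".
    while ! curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -qx ok; do
        echo "healthz probe failed; gathering component logs before retrying"
        sleep 2    # the log shows ~2-3s of diagnostics between probes
    done
    echo "apiserver healthy"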
	I0827 15:25:20.638638    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:25:20.674438    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:25:20.674548    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:25:20.696101    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:25:20.696195    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:25:20.712588    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:25:20.712659    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:25:20.726190    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:25:20.726243    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:25:20.739960    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:25:20.740027    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:25:20.751524    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:25:20.751576    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:25:20.762196    3801 logs.go:276] 0 containers: []
	W0827 15:25:20.762207    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:25:20.762247    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:25:20.773166    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:25:20.773184    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:25:20.773189    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:25:20.784931    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:25:20.784940    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:25:20.810252    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:25:20.810263    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:25:20.828593    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:25:20.828604    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:25:20.865317    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:25:20.865327    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:25:20.879280    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:25:20.879290    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:25:20.893986    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:25:20.893999    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:25:20.910536    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:25:20.910546    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:25:20.921764    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:25:20.921775    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:25:20.960969    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:25:20.960979    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:25:20.965150    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:25:20.965158    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:25:20.976799    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:25:20.976810    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:25:20.988189    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:25:20.988202    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:25:21.002029    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:25:21.002040    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:25:21.020111    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:25:21.020123    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:25:21.035538    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:25:21.035551    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:25:21.049215    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:25:21.049226    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:25:23.563280    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:25:28.565680    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:25:28.565790    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:25:28.577367    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:25:28.577430    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:25:28.588264    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:25:28.588332    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:25:28.599338    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:25:28.599402    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:25:28.610594    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:25:28.610664    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:25:28.621613    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:25:28.621676    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:25:28.632940    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:25:28.633002    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:25:28.643415    3801 logs.go:276] 0 containers: []
	W0827 15:25:28.643425    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:25:28.643477    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:25:28.654041    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:25:28.654057    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:25:28.654063    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:25:28.666807    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:25:28.666821    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:25:28.681217    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:25:28.681227    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:25:28.696899    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:25:28.696908    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:25:28.718172    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:25:28.718181    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:25:28.729801    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:25:28.729813    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:25:28.734100    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:25:28.734105    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:25:28.745436    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:25:28.745449    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:25:28.764230    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:25:28.764242    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:25:28.775903    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:25:28.775915    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:25:28.800340    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:25:28.800354    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:25:28.837512    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:25:28.837525    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:25:28.874904    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:25:28.874915    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:25:28.888578    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:25:28.888590    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:25:28.902082    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:25:28.902091    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:25:28.934336    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:25:28.934345    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:25:28.951873    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:25:28.951884    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
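Each failed probe triggers the same diagnostic sweep: logs.go enumerates the containers for every control-plane component by name filter, then tails each container's logs along with kubelet, Docker/cri-docker, dmesg, overall container status, and kubectl describe nodes. Collected in one place, the commands this log shows being run over SSH are (a sketch, verbatim from the lines above; <id> is a placeholder for each container ID the filters return):

    # One diagnostic sweep, repeated after every failed probe in this section.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
        docker ps -a --filter=name=k8s_$c --format={{.ID}}
    done
    docker logs --tail 400 <id>    # run per container ID found above
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig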
	I0827 15:25:31.465456    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:25:36.467886    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:25:36.468091    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:25:36.480336    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:25:36.480411    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:25:36.491421    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:25:36.491502    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:25:36.502204    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:25:36.502268    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:25:36.513789    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:25:36.513861    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:25:36.524394    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:25:36.524461    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:25:36.537934    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:25:36.538002    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:25:36.548398    3801 logs.go:276] 0 containers: []
	W0827 15:25:36.548409    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:25:36.548466    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:25:36.560399    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:25:36.560416    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:25:36.560422    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:25:36.596904    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:25:36.596915    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:25:36.634755    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:25:36.634763    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:25:36.647356    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:25:36.647366    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:25:36.665708    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:25:36.665721    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:25:36.681560    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:25:36.681570    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:25:36.698940    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:25:36.698950    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:25:36.703744    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:25:36.703751    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:25:36.718070    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:25:36.718080    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:25:36.729408    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:25:36.729420    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:25:36.744070    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:25:36.744080    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:25:36.758920    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:25:36.758931    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:25:36.773504    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:25:36.773514    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:25:36.787676    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:25:36.787686    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:25:36.802359    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:25:36.802369    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:25:36.814004    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:25:36.814015    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:25:36.838489    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:25:36.838497    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:25:39.352504    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:25:44.354354    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:25:44.354501    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:25:44.370138    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:25:44.370224    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:25:44.382849    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:25:44.382921    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:25:44.393829    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:25:44.393897    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:25:44.404137    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:25:44.404201    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:25:44.414387    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:25:44.414457    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:25:44.425495    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:25:44.425562    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:25:44.435608    3801 logs.go:276] 0 containers: []
	W0827 15:25:44.435621    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:25:44.435676    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:25:44.445967    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:25:44.445985    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:25:44.445991    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:25:44.457232    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:25:44.457244    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:25:44.474243    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:25:44.474253    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:25:44.485398    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:25:44.485411    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:25:44.496529    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:25:44.496542    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:25:44.531662    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:25:44.531676    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:25:44.544330    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:25:44.544344    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:25:44.571044    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:25:44.571058    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:25:44.586421    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:25:44.586434    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:25:44.600220    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:25:44.600234    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:25:44.604398    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:25:44.604407    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:25:44.615625    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:25:44.615635    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:25:44.629441    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:25:44.629452    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:25:44.654278    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:25:44.654289    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:25:44.692520    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:25:44.692530    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:25:44.707010    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:25:44.707024    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:25:44.720501    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:25:44.720512    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:25:47.237101    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:25:52.239342    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:25:52.239769    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:25:52.279310    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:25:52.279449    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:25:52.300478    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:25:52.300578    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:25:52.320060    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:25:52.320141    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:25:52.333896    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:25:52.333972    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:25:52.344707    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:25:52.344778    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:25:52.355655    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:25:52.355723    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:25:52.370034    3801 logs.go:276] 0 containers: []
	W0827 15:25:52.370048    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:25:52.370112    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:25:52.387371    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:25:52.387391    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:25:52.387396    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:25:52.429581    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:25:52.429606    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:25:52.435196    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:25:52.435207    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:25:52.450364    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:25:52.450378    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:25:52.463043    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:25:52.463056    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:25:52.479286    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:25:52.479297    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:25:52.516410    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:25:52.516423    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:25:52.531064    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:25:52.531075    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:25:52.544221    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:25:52.544232    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:25:52.559755    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:25:52.559765    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:25:52.571825    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:25:52.571838    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:25:52.589552    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:25:52.589565    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:25:52.608787    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:25:52.608799    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:25:52.633076    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:25:52.633095    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:25:52.644958    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:25:52.644969    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:25:52.659380    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:25:52.659390    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:25:52.673616    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:25:52.673629    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:25:55.185946    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:00.188054    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:00.188428    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:26:00.225015    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:26:00.225155    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:26:00.247127    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:26:00.247224    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:26:00.260918    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:26:00.260997    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:26:00.272349    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:26:00.272420    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:26:00.283359    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:26:00.283424    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:26:00.293851    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:26:00.293915    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:26:00.304297    3801 logs.go:276] 0 containers: []
	W0827 15:26:00.304309    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:26:00.304367    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:26:00.314714    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:26:00.314731    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:26:00.314736    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:26:00.330876    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:26:00.330888    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:26:00.343098    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:26:00.343110    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:26:00.361177    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:26:00.361187    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:26:00.373403    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:26:00.373412    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:26:00.387110    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:26:00.387123    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:26:00.401768    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:26:00.401779    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:26:00.418119    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:26:00.418132    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:26:00.457773    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:26:00.457786    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:26:00.472132    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:26:00.472147    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:26:00.486459    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:26:00.486478    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:26:00.503552    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:26:00.503563    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:26:00.515173    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:26:00.515184    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:26:00.540061    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:26:00.540072    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:26:00.544198    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:26:00.544207    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:26:00.582293    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:26:00.582305    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:26:00.599536    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:26:00.599547    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:26:03.112947    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:08.113084    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:08.113417    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:26:08.143058    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:26:08.143203    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:26:08.162972    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:26:08.163076    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:26:08.176942    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:26:08.177022    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:26:08.188626    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:26:08.188696    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:26:08.198920    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:26:08.198990    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:26:08.209644    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:26:08.209716    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:26:08.219949    3801 logs.go:276] 0 containers: []
	W0827 15:26:08.219960    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:26:08.220017    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:26:08.230632    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:26:08.230649    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:26:08.230654    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:26:08.234994    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:26:08.235001    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:26:08.248619    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:26:08.248628    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:26:08.259414    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:26:08.259425    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:26:08.270867    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:26:08.270878    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:26:08.288241    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:26:08.288252    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:26:08.312622    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:26:08.312631    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:26:08.324563    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:26:08.324576    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:26:08.337648    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:26:08.337662    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:26:08.376742    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:26:08.376749    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:26:08.412311    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:26:08.412323    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:26:08.427066    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:26:08.427077    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:26:08.442557    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:26:08.442569    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:26:08.454503    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:26:08.454515    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:26:08.467209    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:26:08.467220    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:26:08.480904    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:26:08.480915    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:26:08.498767    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:26:08.498780    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:26:11.011985    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:16.014587    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:16.014780    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:26:16.027826    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:26:16.027899    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:26:16.038000    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:26:16.038067    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:26:16.048238    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:26:16.048303    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:26:16.064136    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:26:16.064207    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:26:16.074653    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:26:16.074716    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:26:16.086142    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:26:16.086208    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:26:16.096576    3801 logs.go:276] 0 containers: []
	W0827 15:26:16.096586    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:26:16.096637    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:26:16.107045    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:26:16.107064    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:26:16.107069    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:26:16.121979    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:26:16.121988    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:26:16.134498    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:26:16.134514    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:26:16.138961    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:26:16.138967    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:26:16.153062    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:26:16.153075    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:26:16.164196    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:26:16.164208    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:26:16.182355    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:26:16.182366    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:26:16.193976    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:26:16.193986    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:26:16.209171    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:26:16.209182    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:26:16.248665    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:26:16.248673    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:26:16.262937    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:26:16.262947    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:26:16.274009    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:26:16.274020    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:26:16.298353    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:26:16.298360    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:26:16.334869    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:26:16.334883    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:26:16.355910    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:26:16.355923    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:26:16.367085    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:26:16.367096    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:26:16.379567    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:26:16.379581    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:26:18.895537    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:23.898075    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:23.898321    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:26:23.924956    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:26:23.925080    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:26:23.952435    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:26:23.952516    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:26:23.964865    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:26:23.964935    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:26:23.976988    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:26:23.977058    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:26:23.988428    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:26:23.988500    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:26:23.999749    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:26:23.999834    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:26:24.010758    3801 logs.go:276] 0 containers: []
	W0827 15:26:24.010769    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:26:24.010827    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:26:24.026449    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:26:24.026467    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:26:24.026473    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:26:24.039696    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:26:24.039709    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:26:24.054743    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:26:24.054755    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:26:24.090142    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:26:24.090155    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:26:24.103822    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:26:24.103834    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:26:24.114649    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:26:24.114662    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:26:24.134113    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:26:24.134127    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:26:24.146178    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:26:24.146193    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:26:24.168790    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:26:24.168797    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:26:24.182945    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:26:24.182956    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:26:24.197596    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:26:24.197608    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:26:24.214757    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:26:24.214767    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:26:24.226134    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:26:24.226146    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:26:24.237022    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:26:24.237035    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:26:24.273837    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:26:24.273844    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:26:24.277859    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:26:24.277866    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:26:24.289547    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:26:24.289558    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:26:26.803308    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:31.805473    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:31.805581    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:26:31.817349    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:26:31.817419    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:26:31.828763    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:26:31.828833    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:26:31.840363    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:26:31.840433    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:26:31.852922    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:26:31.852996    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:26:31.864718    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:26:31.864785    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:26:31.876976    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:26:31.877048    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:26:31.890576    3801 logs.go:276] 0 containers: []
	W0827 15:26:31.890587    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:26:31.890643    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:26:31.903154    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:26:31.903176    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:26:31.903183    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:26:31.931468    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:26:31.931487    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:26:31.949674    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:26:31.949688    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:26:31.961197    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:26:31.961207    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:26:31.977199    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:26:31.977209    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:26:31.989195    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:26:31.989205    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:26:32.006655    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:26:32.006665    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:26:32.018165    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:26:32.018173    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:26:32.056561    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:26:32.056572    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:26:32.072087    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:26:32.072097    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:26:32.086128    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:26:32.086136    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:26:32.098007    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:26:32.098017    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:26:32.133440    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:26:32.133450    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:26:32.149369    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:26:32.149381    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:26:32.163707    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:26:32.163718    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:26:32.168390    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:26:32.168397    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:26:32.179798    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:26:32.179807    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:26:34.699161    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:39.701576    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:39.701725    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:26:39.718890    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:26:39.718967    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:26:39.732071    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:26:39.732145    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:26:39.742977    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:26:39.743041    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:26:39.757971    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:26:39.758044    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:26:39.776706    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:26:39.776782    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:26:39.787658    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:26:39.787724    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:26:39.797768    3801 logs.go:276] 0 containers: []
	W0827 15:26:39.797778    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:26:39.797833    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:26:39.808025    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
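The eight "docker ps -a --filter=name=k8s_<component>" runs above are how logs.go discovers which container IDs to pull logs from; the k8s_ name prefix is the naming convention dockershim/cri-dockerd uses for pod containers. A hedged sketch of the same enumeration via os/exec, with the component list copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	}
    	for _, c := range components {
    		// Same filter/format flags as the ssh_runner lines above.
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
    	}
    }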
	I0827 15:26:39.808042    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:26:39.808047    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:26:39.820753    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:26:39.820763    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:26:39.825090    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:26:39.825097    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:26:39.839325    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:26:39.839339    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:26:39.855198    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:26:39.855206    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:26:39.871160    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:26:39.871172    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:26:39.883406    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:26:39.883417    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:26:39.897883    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:26:39.897895    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:26:39.938803    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:26:39.938813    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:26:39.950586    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:26:39.950597    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:26:39.970482    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:26:39.970493    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:26:39.985509    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:26:39.985523    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:26:40.009719    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:26:40.009736    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:26:40.046210    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:26:40.046223    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:26:40.061224    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:26:40.061234    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:26:40.073723    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:26:40.073734    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:26:40.091997    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:26:40.092006    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:26:42.607867    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:47.609892    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:47.610022    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:26:47.621610    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:26:47.621698    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:26:47.633412    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:26:47.633486    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:26:47.656470    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:26:47.656546    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:26:47.671999    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:26:47.672074    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:26:47.687397    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:26:47.687468    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:26:47.702630    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:26:47.702702    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:26:47.713918    3801 logs.go:276] 0 containers: []
	W0827 15:26:47.713931    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:26:47.713993    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:26:47.726221    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:26:47.726239    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:26:47.726244    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:26:47.741463    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:26:47.741472    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:26:47.753642    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:26:47.753655    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:26:47.770918    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:26:47.770933    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:26:47.811898    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:26:47.811925    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:26:47.817703    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:26:47.817713    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:26:47.859513    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:26:47.859525    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:26:47.880513    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:26:47.880530    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:26:47.892658    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:26:47.892672    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:26:47.909266    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:26:47.909285    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:26:47.922718    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:26:47.922731    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:26:47.950080    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:26:47.950089    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:26:47.968245    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:26:47.968258    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:26:47.990457    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:26:47.990467    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:26:48.004813    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:26:48.004826    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:26:48.023435    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:26:48.023450    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:26:48.035428    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:26:48.035444    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:26:50.555303    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:55.557331    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:55.557428    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:26:55.568341    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:26:55.568415    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:26:55.579383    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:26:55.579450    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:26:55.592023    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:26:55.592089    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:26:55.602117    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:26:55.602190    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:26:55.613309    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:26:55.613377    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:26:55.625130    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:26:55.625194    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:26:55.635264    3801 logs.go:276] 0 containers: []
	W0827 15:26:55.635275    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:26:55.635336    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:26:55.646152    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:26:55.646168    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:26:55.646174    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:26:55.650408    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:26:55.650417    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:26:55.684600    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:26:55.684610    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:26:55.698661    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:26:55.698670    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:26:55.711820    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:26:55.711830    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:26:55.726306    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:26:55.726319    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:26:55.744290    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:26:55.744299    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:26:55.758013    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:26:55.758025    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:26:55.772199    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:26:55.772209    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:26:55.783558    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:26:55.783570    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:26:55.807397    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:26:55.807404    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:26:55.818886    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:26:55.818896    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:26:55.857323    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:26:55.857331    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:26:55.869787    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:26:55.869797    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:26:55.880951    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:26:55.880962    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:26:55.896325    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:26:55.896335    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:26:55.908028    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:26:55.908038    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:26:58.421507    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:03.423621    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:03.423771    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:27:03.435268    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:27:03.435348    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:27:03.448089    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:27:03.448164    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:27:03.458923    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:27:03.458994    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:27:03.469010    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:27:03.469077    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:27:03.480008    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:27:03.480070    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:27:03.490817    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:27:03.490885    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:27:03.500770    3801 logs.go:276] 0 containers: []
	W0827 15:27:03.500782    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:27:03.500839    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:27:03.511780    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:27:03.511797    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:27:03.511803    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:27:03.527723    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:27:03.527735    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:27:03.539943    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:27:03.539954    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:27:03.554731    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:27:03.554741    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:27:03.588425    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:27:03.588435    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:27:03.601406    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:27:03.601416    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:27:03.618931    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:27:03.618940    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:27:03.657577    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:27:03.657586    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:27:03.677696    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:27:03.677707    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:27:03.692246    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:27:03.692257    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:27:03.703887    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:27:03.703898    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:27:03.717832    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:27:03.717846    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:27:03.733812    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:27:03.733825    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:27:03.745909    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:27:03.745919    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:27:03.757655    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:27:03.757666    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:27:03.769534    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:27:03.769546    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:27:03.792205    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:27:03.792214    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:27:06.298237    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:11.300306    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:11.300389    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:27:11.311131    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:27:11.311208    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:27:11.328069    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:27:11.328146    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:27:11.338309    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:27:11.338375    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:27:11.349662    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:27:11.349736    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:27:11.360043    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:27:11.360117    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:27:11.370104    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:27:11.370173    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:27:11.380104    3801 logs.go:276] 0 containers: []
	W0827 15:27:11.380114    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:27:11.380176    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:27:11.390327    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:27:11.390344    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:27:11.390349    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:27:11.404315    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:27:11.404325    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:27:11.417418    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:27:11.417430    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:27:11.432965    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:27:11.432974    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:27:11.445280    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:27:11.445290    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:27:11.456722    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:27:11.456733    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:27:11.480472    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:27:11.480479    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:27:11.485011    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:27:11.485020    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:27:11.520238    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:27:11.520249    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:27:11.533683    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:27:11.533697    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:27:11.548000    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:27:11.548010    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:27:11.586749    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:27:11.586758    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:27:11.603386    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:27:11.603396    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:27:11.619353    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:27:11.619362    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:27:11.631580    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:27:11.631591    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:27:11.645362    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:27:11.645373    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:27:11.664024    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:27:11.664034    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:27:14.178021    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:19.180055    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:19.180164    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:27:19.192602    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:27:19.192676    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:27:19.205243    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:27:19.205314    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:27:19.218652    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:27:19.218724    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:27:19.229765    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:27:19.229835    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:27:19.240177    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:27:19.240251    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:27:19.251147    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:27:19.251217    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:27:19.263811    3801 logs.go:276] 0 containers: []
	W0827 15:27:19.263823    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:27:19.263885    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:27:19.274446    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:27:19.274462    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:27:19.274467    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:27:19.298578    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:27:19.298588    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:27:19.341799    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:27:19.341813    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:27:19.356487    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:27:19.356500    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:27:19.368215    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:27:19.368228    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:27:19.380820    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:27:19.380834    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:27:19.418651    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:27:19.418663    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:27:19.433110    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:27:19.433122    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:27:19.447124    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:27:19.447137    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:27:19.465875    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:27:19.465886    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:27:19.480000    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:27:19.480009    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:27:19.492376    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:27:19.492386    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:27:19.504268    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:27:19.504281    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:27:19.516059    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:27:19.516073    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:27:19.534214    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:27:19.534225    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:27:19.539031    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:27:19.539038    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:27:19.550596    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:27:19.550606    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:27:22.069600    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:27.071733    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:27.071852    3801 kubeadm.go:597] duration metric: took 4m4.305828458s to restartPrimaryControlPlane
	W0827 15:27:27.071953    3801 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0827 15:27:27.071999    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0827 15:27:28.067346    3801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 15:27:28.072646    3801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 15:27:28.075309    3801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 15:27:28.077921    3801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 15:27:28.077926    3801 kubeadm.go:157] found existing configuration files:
	
	I0827 15:27:28.077950    3801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/admin.conf
	I0827 15:27:28.080928    3801 kubeadm.go:163] "https://control-plane.minikube.internal:50266" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 15:27:28.080950    3801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 15:27:28.083460    3801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/kubelet.conf
	I0827 15:27:28.086243    3801 kubeadm.go:163] "https://control-plane.minikube.internal:50266" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 15:27:28.086269    3801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 15:27:28.089242    3801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/controller-manager.conf
	I0827 15:27:28.091735    3801 kubeadm.go:163] "https://control-plane.minikube.internal:50266" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 15:27:28.091759    3801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 15:27:28.094380    3801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/scheduler.conf
	I0827 15:27:28.097264    3801 kubeadm.go:163] "https://control-plane.minikube.internal:50266" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 15:27:28.097285    3801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
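The grep/rm pairs above implement a simple cleanup rule: if a kubeconfig under /etc/kubernetes does not reference the expected control-plane endpoint, it is treated as stale and removed before "kubeadm init" runs. A minimal sketch of that check-then-remove pattern, with the endpoint and file list copied from the log; the real flow runs these over SSH with sudo, while the sketch below operates locally:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:50266"
    	confs := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range confs {
    		// grep exits non-zero when the endpoint is absent (or the file is
    		// missing), which is the "may not be in ... - will remove" case above.
    		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
    			fmt.Println("removing stale config:", f)
    			if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
    				fmt.Println("remove failed:", err)
    			}
    		}
    	}
    }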
	I0827 15:27:28.099582    3801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0827 15:27:28.116487    3801 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0827 15:27:28.116579    3801 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 15:27:28.167178    3801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 15:27:28.167279    3801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 15:27:28.167342    3801 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0827 15:27:28.218833    3801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 15:27:28.223045    3801 out.go:235]   - Generating certificates and keys ...
	I0827 15:27:28.223081    3801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 15:27:28.223115    3801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 15:27:28.223167    3801 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0827 15:27:28.223201    3801 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0827 15:27:28.223234    3801 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0827 15:27:28.223259    3801 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0827 15:27:28.223294    3801 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0827 15:27:28.223326    3801 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0827 15:27:28.223368    3801 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0827 15:27:28.223405    3801 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0827 15:27:28.223424    3801 kubeadm.go:310] [certs] Using the existing "sa" key
	I0827 15:27:28.223453    3801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 15:27:28.301182    3801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 15:27:28.513891    3801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 15:27:28.567596    3801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 15:27:28.744152    3801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 15:27:28.773390    3801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 15:27:28.773746    3801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 15:27:28.773769    3801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 15:27:28.847916    3801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 15:27:28.851497    3801 out.go:235]   - Booting up control plane ...
	I0827 15:27:28.851544    3801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 15:27:28.851583    3801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 15:27:28.851614    3801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 15:27:28.856069    3801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 15:27:28.857006    3801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0827 15:27:33.358352    3801 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501382 seconds
	I0827 15:27:33.358467    3801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0827 15:27:33.363442    3801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0827 15:27:33.882935    3801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0827 15:27:33.883248    3801 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-301000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0827 15:27:34.386421    3801 kubeadm.go:310] [bootstrap-token] Using token: eq0u6u.znq3ywqbbt29bia7
	I0827 15:27:34.389828    3801 out.go:235]   - Configuring RBAC rules ...
	I0827 15:27:34.389890    3801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0827 15:27:34.389942    3801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0827 15:27:34.397414    3801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0827 15:27:34.398115    3801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0827 15:27:34.399078    3801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0827 15:27:34.399973    3801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0827 15:27:34.403206    3801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0827 15:27:34.569523    3801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0827 15:27:34.792101    3801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0827 15:27:34.793047    3801 kubeadm.go:310] 
	I0827 15:27:34.793084    3801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0827 15:27:34.793088    3801 kubeadm.go:310] 
	I0827 15:27:34.793124    3801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0827 15:27:34.793127    3801 kubeadm.go:310] 
	I0827 15:27:34.793139    3801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0827 15:27:34.793168    3801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0827 15:27:34.793195    3801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0827 15:27:34.793198    3801 kubeadm.go:310] 
	I0827 15:27:34.793224    3801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0827 15:27:34.793227    3801 kubeadm.go:310] 
	I0827 15:27:34.793249    3801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0827 15:27:34.793251    3801 kubeadm.go:310] 
	I0827 15:27:34.793279    3801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0827 15:27:34.793316    3801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0827 15:27:34.793446    3801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0827 15:27:34.793450    3801 kubeadm.go:310] 
	I0827 15:27:34.793490    3801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0827 15:27:34.793535    3801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0827 15:27:34.793544    3801 kubeadm.go:310] 
	I0827 15:27:34.793587    3801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eq0u6u.znq3ywqbbt29bia7 \
	I0827 15:27:34.793641    3801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e40211cdbb70880cf4203fcff26994c3c3ef69e4bd2b230e97a832f2aa67022 \
	I0827 15:27:34.793655    3801 kubeadm.go:310] 	--control-plane 
	I0827 15:27:34.793657    3801 kubeadm.go:310] 
	I0827 15:27:34.793697    3801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0827 15:27:34.793700    3801 kubeadm.go:310] 
	I0827 15:27:34.793768    3801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eq0u6u.znq3ywqbbt29bia7 \
	I0827 15:27:34.793824    3801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e40211cdbb70880cf4203fcff26994c3c3ef69e4bd2b230e97a832f2aa67022 
	I0827 15:27:34.793894    3801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0827 15:27:34.793902    3801 cni.go:84] Creating CNI manager for ""
	I0827 15:27:34.793910    3801 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:27:34.800088    3801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0827 15:27:34.807218    3801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0827 15:27:34.810578    3801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
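The scp line above writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the payload itself is not shown in the log. For orientation only, a minimal bridge conflist has roughly the following shape. Every field value here is an illustrative assumption, not the bytes minikube actually wrote:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }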
	I0827 15:27:34.815601    3801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0827 15:27:34.815689    3801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 15:27:34.815693    3801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-301000 minikube.k8s.io/updated_at=2024_08_27T15_27_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf minikube.k8s.io/name=running-upgrade-301000 minikube.k8s.io/primary=true
	I0827 15:27:34.861558    3801 ops.go:34] apiserver oom_adj: -16
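The oom_adj check above reads the apiserver's OOM score adjustment straight from procfs; -16 tells the kernel to strongly deprioritize killing that process under memory pressure. A tiny sketch of the same read, mirroring the pgrep-based lookup in the log (oom_adj is the legacy knob the log uses; modern kernels also expose oom_score_adj):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Newest exactly-matching kube-apiserver PID, as in the log's pgrep usage.
    	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("pgrep failed:", err)
    		return
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Println("read failed:", err)
    		return
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // e.g. -16
    }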
	I0827 15:27:34.861569    3801 kubeadm.go:1113] duration metric: took 45.930542ms to wait for elevateKubeSystemPrivileges
	I0827 15:27:34.861580    3801 kubeadm.go:394] duration metric: took 4m12.110122208s to StartCluster
	I0827 15:27:34.861591    3801 settings.go:142] acquiring lock: {Name:mk8039639095abb20902a2ce8e0a004770b18340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:27:34.861678    3801 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:27:34.862044    3801 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/kubeconfig: {Name:mk76bdfc088f48bbbf806c94a3244a333f8aeabd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:27:34.862267    3801 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:27:34.862286    3801 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0827 15:27:34.862336    3801 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-301000"
	I0827 15:27:34.862346    3801 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-301000"
	W0827 15:27:34.862350    3801 addons.go:243] addon storage-provisioner should already be in state true
	I0827 15:27:34.862352    3801 config.go:182] Loaded profile config "running-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:27:34.862362    3801 host.go:66] Checking if "running-upgrade-301000" exists ...
	I0827 15:27:34.862382    3801 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-301000"
	I0827 15:27:34.862409    3801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-301000"
	I0827 15:27:34.863260    3801 kapi.go:59] client config for running-upgrade-301000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/client.key", CAFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1027b7eb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
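The kapi.go dump above (rejoined onto a single line here) is minikube's client-go rest.Config for the profile, built from the per-profile client certificate and key plus the cluster CA. A sketch of the equivalent construction with standard client-go, using the cert paths exactly as they appear in the log; the error handling is mine:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/client.crt",
    			KeyFile:  "/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Println("client construction failed:", err)
    		return
    	}
    	_ = clientset // e.g. clientset.CoreV1().Nodes().List(...) once the apiserver responds
    }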
	I0827 15:27:34.863385    3801 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-301000"
	W0827 15:27:34.863389    3801 addons.go:243] addon default-storageclass should already be in state true
	I0827 15:27:34.863397    3801 host.go:66] Checking if "running-upgrade-301000" exists ...
	I0827 15:27:34.866169    3801 out.go:177] * Verifying Kubernetes components...
	I0827 15:27:34.866459    3801 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0827 15:27:34.869443    3801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0827 15:27:34.869449    3801 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/running-upgrade-301000/id_rsa Username:docker}
	I0827 15:27:34.873109    3801 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:27:34.877093    3801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:27:34.880145    3801 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 15:27:34.880151    3801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0827 15:27:34.880157    3801 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/running-upgrade-301000/id_rsa Username:docker}
	I0827 15:27:34.958088    3801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 15:27:34.963752    3801 api_server.go:52] waiting for apiserver process to appear ...
	I0827 15:27:34.963801    3801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 15:27:34.967913    3801 api_server.go:72] duration metric: took 105.638417ms to wait for apiserver process to appear ...
	I0827 15:27:34.967922    3801 api_server.go:88] waiting for apiserver healthz status ...
	I0827 15:27:34.967930    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:34.984132    3801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0827 15:27:35.001186    3801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 15:27:35.317727    3801 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0827 15:27:35.317741    3801 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0827 15:27:39.969935    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:39.969990    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:44.970268    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:44.970295    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:49.970493    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:49.970521    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:54.970821    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:54.970876    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:59.971378    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:59.971435    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:04.972106    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:04.972135    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0827 15:28:05.318104    3801 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0827 15:28:05.323353    3801 out.go:177] * Enabled addons: storage-provisioner
	I0827 15:28:05.331281    3801 addons.go:510] duration metric: took 30.47000425s for enable addons: enabled=[storage-provisioner]
	I0827 15:28:09.973070    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:09.973143    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:14.974619    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:14.974665    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:19.975764    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:19.975818    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:24.977941    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:24.977989    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:29.978874    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:29.978896    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:34.980096    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:34.980254    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:34.993005    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:28:34.993082    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:35.008829    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:28:35.008898    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:35.019755    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:28:35.019834    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:35.035032    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:28:35.035100    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:35.046205    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:28:35.046279    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:35.057059    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:28:35.057131    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:35.071398    3801 logs.go:276] 0 containers: []
	W0827 15:28:35.071409    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:35.071470    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:35.086240    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:28:35.086254    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:28:35.086260    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:28:35.101285    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:28:35.101297    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:28:35.115061    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:28:35.115074    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:28:35.126715    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:28:35.126727    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:28:35.145040    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:28:35.145051    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:28:35.164670    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:35.164684    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:35.198772    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:35.198785    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:35.202989    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:35.202995    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:35.237172    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:28:35.237185    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:28:35.249251    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:28:35.249265    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:28:35.261325    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:28:35.261338    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:28:35.275955    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:35.275966    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:35.300230    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:28:35.300237    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:37.812072    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:42.812978    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:42.813234    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:42.837160    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:28:42.837258    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:42.853448    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:28:42.853544    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:42.866454    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:28:42.866528    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:42.878193    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:28:42.878261    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:42.891402    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:28:42.891472    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:42.902279    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:28:42.902350    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:42.912995    3801 logs.go:276] 0 containers: []
	W0827 15:28:42.913006    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:42.913067    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:42.923341    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:28:42.923357    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:28:42.923364    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:28:42.937885    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:28:42.937898    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:28:42.952605    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:28:42.952618    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:28:42.964236    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:28:42.964250    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:28:42.976078    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:28:42.976088    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:28:42.987066    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:42.987079    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:43.011220    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:43.011238    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:43.044470    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:43.044479    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:43.081526    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:28:43.081537    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:28:43.093496    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:28:43.093506    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:28:43.109471    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:28:43.109482    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:28:43.128683    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:28:43.128693    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:43.141456    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:43.141467    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:45.646140    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:50.648206    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:50.648410    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:50.673687    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:28:50.673782    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:50.688259    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:28:50.688328    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:50.700061    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:28:50.700138    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:50.711394    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:28:50.711454    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:50.721663    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:28:50.721734    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:50.731921    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:28:50.731990    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:50.742154    3801 logs.go:276] 0 containers: []
	W0827 15:28:50.742168    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:50.742233    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:50.752717    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:28:50.752732    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:28:50.752737    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:28:50.767151    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:28:50.767163    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:28:50.783943    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:28:50.783954    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:28:50.801368    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:50.801381    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:50.834874    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:50.834884    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:50.838961    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:50.838969    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:50.874105    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:28:50.874120    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:28:50.888087    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:28:50.888100    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:28:50.899341    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:28:50.899353    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:28:50.910947    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:28:50.910956    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:28:50.922237    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:28:50.922248    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:28:50.933141    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:50.933151    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:50.957088    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:28:50.957098    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:53.469410    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:58.470003    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:58.470483    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:58.509316    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:28:58.509451    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:58.531760    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:28:58.531861    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:58.547468    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:28:58.547554    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:58.561581    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:28:58.561657    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:58.573174    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:28:58.573247    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:58.584158    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:28:58.584240    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:58.594619    3801 logs.go:276] 0 containers: []
	W0827 15:28:58.594633    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:58.594682    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:58.606478    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:28:58.606492    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:28:58.606497    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:28:58.618221    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:58.618232    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:58.643006    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:28:58.643018    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:58.654746    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:58.654757    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:58.690241    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:28:58.690249    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:28:58.704767    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:28:58.704777    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:28:58.718703    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:28:58.718714    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:28:58.730869    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:28:58.730880    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:28:58.742987    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:58.742997    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:58.747384    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:58.747393    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:58.781655    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:28:58.781670    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:28:58.793653    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:28:58.793666    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:28:58.808470    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:28:58.808481    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:01.328690    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:06.329815    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:06.329920    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:06.341104    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:29:06.341177    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:06.352293    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:29:06.352360    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:06.362917    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:29:06.362988    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:06.374031    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:29:06.374100    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:06.384153    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:29:06.384227    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:06.395306    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:29:06.395373    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:06.405342    3801 logs.go:276] 0 containers: []
	W0827 15:29:06.405353    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:06.405409    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:06.416004    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:29:06.416027    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:06.416034    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:06.451368    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:29:06.451388    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:06.470795    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:29:06.470806    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:29:06.481904    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:29:06.481915    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:29:06.493565    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:06.493575    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:06.519034    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:06.519044    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:06.524076    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:06.524084    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:06.559049    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:29:06.559061    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:29:06.573627    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:29:06.573637    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:29:06.585708    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:29:06.585720    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:29:06.607075    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:29:06.607087    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:29:06.618928    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:29:06.618939    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:06.636593    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:29:06.636605    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:09.150469    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:14.152607    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:14.152711    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:14.163438    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:29:14.163510    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:14.177220    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:29:14.177286    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:14.188090    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:29:14.188151    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:14.200254    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:29:14.200322    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:14.211474    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:29:14.211540    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:14.222577    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:29:14.222653    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:14.233713    3801 logs.go:276] 0 containers: []
	W0827 15:29:14.233725    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:14.233782    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:14.244097    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:29:14.244115    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:29:14.244120    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:14.257993    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:29:14.258006    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:14.276339    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:14.276348    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:14.310692    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:29:14.310702    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:29:14.325977    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:29:14.325988    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:29:14.337581    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:29:14.337593    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:29:14.349291    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:29:14.349301    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:29:14.363801    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:29:14.363811    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:29:14.378978    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:29:14.378988    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:29:14.390027    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:14.390037    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:14.415034    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:14.415042    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:14.448986    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:14.448996    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:14.453263    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:29:14.453269    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:16.966823    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:21.967278    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:21.967645    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:22.004630    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:29:22.004771    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:22.024921    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:29:22.025012    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:22.040444    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:29:22.040514    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:22.058377    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:29:22.058449    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:22.069154    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:29:22.069231    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:22.080263    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:29:22.080333    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:22.090908    3801 logs.go:276] 0 containers: []
	W0827 15:29:22.090925    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:22.090989    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:22.104038    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:29:22.104054    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:29:22.104058    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:29:22.115943    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:29:22.115957    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:22.133738    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:22.133748    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:22.167202    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:22.167210    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:22.171738    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:22.171745    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:22.205561    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:29:22.205575    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:29:22.223509    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:29:22.223522    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:29:22.236542    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:29:22.236552    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:29:22.248822    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:29:22.248832    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:22.262773    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:29:22.262785    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:29:22.281794    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:29:22.281805    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:29:22.293755    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:22.293768    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:22.317344    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:29:22.317352    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:24.831463    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:29.833550    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:29.833663    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:29.847121    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:29:29.847201    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:29.861431    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:29:29.861499    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:29.872065    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:29:29.872137    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:29.882780    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:29:29.882853    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:29.893113    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:29:29.893183    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:29.905317    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:29:29.905387    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:29.915436    3801 logs.go:276] 0 containers: []
	W0827 15:29:29.915453    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:29.915516    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:29.937288    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:29:29.937303    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:29:29.937308    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:29:29.948753    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:29:29.948764    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:29:29.960215    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:29:29.960227    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:29:29.983305    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:29:29.983316    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:29:29.994920    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:29:29.994932    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:30.013499    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:30.013509    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:30.018210    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:30.018216    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:30.054970    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:29:30.054986    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:29:30.069760    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:29:30.069770    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:30.081785    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:30.081797    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:30.106666    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:30.106676    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:30.142092    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:29:30.142104    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:30.156070    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:29:30.156082    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:29:32.669892    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:37.672094    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:37.672346    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:37.693969    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:29:37.694084    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:37.710980    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:29:37.711057    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:37.723165    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:29:37.723236    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:37.734082    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:29:37.734144    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:37.744984    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:29:37.745050    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:37.755766    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:29:37.755834    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:37.766003    3801 logs.go:276] 0 containers: []
	W0827 15:29:37.766014    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:37.766072    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:37.776405    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:29:37.776421    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:29:37.776426    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:29:37.788462    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:29:37.788474    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:29:37.800265    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:29:37.800276    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:29:37.814721    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:37.814731    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:37.849140    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:29:37.849151    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:37.867648    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:29:37.867662    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:29:37.883101    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:29:37.883111    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:29:37.894763    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:29:37.894775    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:37.913078    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:29:37.913088    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:29:37.925166    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:37.925176    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:37.948483    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:29:37.948491    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:37.960067    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:37.960078    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:37.993187    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:37.993195    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:40.499854    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:45.502307    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:45.502543    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:45.529715    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:29:45.529851    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:45.547377    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:29:45.547462    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:45.561821    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:29:45.561893    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:45.573107    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:29:45.573165    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:45.583125    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:29:45.583204    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:45.593283    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:29:45.593343    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:45.603353    3801 logs.go:276] 0 containers: []
	W0827 15:29:45.603366    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:45.603427    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:45.614255    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:29:45.614268    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:29:45.614273    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:29:45.628487    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:29:45.628500    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:29:45.639919    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:29:45.639933    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:29:45.651525    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:29:45.651537    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:45.668713    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:29:45.668723    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:29:45.680234    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:45.680244    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:45.703611    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:29:45.703619    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:45.715783    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:45.715794    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:45.720866    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:45.720873    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:45.756023    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:29:45.756035    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:45.774466    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:29:45.774476    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:29:45.785963    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:29:45.785975    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:29:45.828150    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:45.828162    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:48.365238    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:53.367314    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:53.367522    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:53.392861    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:29:53.392979    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:53.410826    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:29:53.410915    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:53.425208    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:29:53.425292    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:53.437390    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:29:53.437449    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:53.451648    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:29:53.451720    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:53.462094    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:29:53.462158    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:53.471920    3801 logs.go:276] 0 containers: []
	W0827 15:29:53.471933    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:53.471986    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:53.482494    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:29:53.482512    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:29:53.482517    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:29:53.497624    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:53.497636    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:53.502134    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:29:53.502141    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:29:53.516545    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:29:53.516557    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:29:53.528846    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:29:53.528858    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:29:53.540790    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:29:53.540801    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:29:53.552925    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:29:53.552936    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:29:53.566543    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:53.566557    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:53.590765    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:29:53.590781    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:53.604842    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:29:53.604857    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:29:53.616355    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:29:53.616367    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:29:53.635092    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:29:53.635106    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:53.652817    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:53.652831    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:53.688363    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:53.688371    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:53.722702    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:29:53.722720    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:56.236692    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:01.238860    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:01.239060    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:01.257552    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:01.257657    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:01.272077    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:01.272154    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:01.284305    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:01.284384    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:01.295099    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:01.295180    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:01.305776    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:01.305856    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:01.316646    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:01.316722    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:01.327526    3801 logs.go:276] 0 containers: []
	W0827 15:30:01.327538    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:01.327607    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:01.338556    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:01.338574    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:01.338580    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:01.363855    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:01.363866    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:01.368825    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:01.368832    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:01.382562    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:01.382573    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:01.400109    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:01.400120    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:01.411747    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:01.411759    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:01.423683    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:01.423695    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:01.459234    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:01.459245    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:01.470479    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:01.470490    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:01.483442    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:01.483454    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:01.498849    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:01.498861    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:01.518370    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:01.518383    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:01.532061    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:01.532073    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:01.544716    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:01.544727    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:01.581378    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:01.581389    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:04.096252    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:09.098445    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:09.098671    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:09.115975    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:09.116062    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:09.129403    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:09.129477    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:09.141620    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:09.141692    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:09.152273    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:09.152350    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:09.163130    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:09.163192    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:09.181047    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:09.181112    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:09.194719    3801 logs.go:276] 0 containers: []
	W0827 15:30:09.194731    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:09.194788    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:09.205311    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:09.205341    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:09.205346    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:09.240687    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:09.240701    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:09.255204    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:09.255216    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:09.270722    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:09.270736    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:09.281869    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:09.281880    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:09.296411    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:09.296421    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:09.330685    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:09.330699    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:09.335616    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:09.335623    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:09.347559    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:09.347570    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:09.359783    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:09.359793    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:09.376322    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:09.376332    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:09.388471    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:09.388481    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:09.406023    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:09.406034    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:09.427358    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:09.427371    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:09.452514    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:09.452523    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:11.966434    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:16.967055    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:16.967228    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:16.983607    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:16.983706    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:16.995907    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:16.995977    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:17.007545    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:17.007618    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:17.019012    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:17.019080    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:17.029541    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:17.029606    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:17.048500    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:17.048565    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:17.058598    3801 logs.go:276] 0 containers: []
	W0827 15:30:17.058609    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:17.058658    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:17.068819    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:17.068838    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:17.068843    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:17.086837    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:17.086848    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:17.098297    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:17.098308    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:17.114992    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:17.115004    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:17.129851    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:17.129862    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:17.141508    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:17.141519    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:17.175729    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:17.175736    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:17.210367    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:17.210378    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:17.222272    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:17.222282    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:17.234625    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:17.234636    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:17.260161    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:17.260169    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:17.264257    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:17.264264    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:17.282955    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:17.282965    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:17.296742    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:17.296752    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:17.307772    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:17.307784    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:19.821919    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:24.824132    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:24.824268    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:24.837470    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:24.837539    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:24.848001    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:24.848072    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:24.858432    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:24.858503    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:24.873168    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:24.873246    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:24.883359    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:24.883432    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:24.893721    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:24.893784    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:24.903997    3801 logs.go:276] 0 containers: []
	W0827 15:30:24.904007    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:24.904067    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:24.914049    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:24.914068    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:24.914074    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:24.949541    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:24.949555    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:24.961531    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:24.961544    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:24.973547    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:24.973558    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:25.007415    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:25.007423    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:25.022460    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:25.022471    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:25.034498    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:25.034511    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:25.049248    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:25.049261    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:25.061078    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:25.061089    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:25.080924    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:25.080934    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:25.098203    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:25.098216    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:25.122028    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:25.122037    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:25.135089    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:25.135102    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:25.139586    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:25.139594    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:25.151203    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:25.151215    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:27.664685    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:32.666801    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:32.666908    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:32.677924    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:32.678000    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:32.688805    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:32.688874    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:32.700138    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:32.700205    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:32.710594    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:32.710662    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:32.724049    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:32.724113    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:32.734555    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:32.734615    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:32.744827    3801 logs.go:276] 0 containers: []
	W0827 15:30:32.744838    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:32.744887    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:32.755174    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:32.755192    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:32.755198    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:32.766814    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:32.766828    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:32.778516    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:32.778528    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:32.793118    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:32.793129    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:32.804383    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:32.804394    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:32.821095    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:32.821109    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:32.855024    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:32.855035    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:32.867548    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:32.867562    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:32.883139    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:32.883151    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:32.898168    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:32.898180    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:32.924846    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:32.924861    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:32.936622    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:32.936635    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:32.941403    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:32.941410    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:32.955796    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:32.955806    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:32.980964    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:32.980972    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:35.520727    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:40.522587    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:40.522843    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:40.550903    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:40.551025    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:40.570593    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:40.570675    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:40.583249    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:40.583320    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:40.594535    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:40.594605    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:40.604533    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:40.604603    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:40.615520    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:40.615587    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:40.625735    3801 logs.go:276] 0 containers: []
	W0827 15:30:40.625751    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:40.625799    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:40.640220    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:40.640237    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:40.640244    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:40.675863    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:40.675874    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:40.694904    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:40.694915    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:40.713578    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:40.713588    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:40.725440    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:40.725451    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:40.737376    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:40.737389    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:40.749082    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:40.749094    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:40.761191    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:40.761202    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:40.787226    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:40.787244    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:40.801316    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:40.801332    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:40.812933    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:40.812943    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:40.825491    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:40.825503    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:40.837445    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:40.837457    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:40.864743    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:40.864755    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:40.899610    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:40.899618    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:43.406253    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:48.408570    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:48.408788    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:48.430818    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:48.430937    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:48.446613    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:48.446689    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:48.459674    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:48.459744    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:48.470947    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:48.471009    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:48.486479    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:48.486550    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:48.496738    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:48.496806    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:48.507130    3801 logs.go:276] 0 containers: []
	W0827 15:30:48.507142    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:48.507203    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:48.517796    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:48.517813    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:48.517818    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:48.542518    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:48.542527    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:48.557052    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:48.557067    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:48.561353    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:48.561362    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:48.596820    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:48.596832    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:48.610180    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:48.610193    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:48.623423    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:48.623436    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:48.641072    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:48.641082    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:48.655107    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:48.655117    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:48.667671    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:48.667683    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:48.689443    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:48.689462    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:48.707880    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:48.707892    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:48.743396    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:48.743407    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:48.755239    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:48.755250    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:48.768020    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:48.768034    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:51.285094    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:56.287290    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:56.287407    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:56.298833    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:56.298908    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:56.310039    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:56.310129    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:56.321403    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:56.321472    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:56.332280    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:56.332358    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:56.343410    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:56.343480    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:56.354307    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:56.354378    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:56.365138    3801 logs.go:276] 0 containers: []
	W0827 15:30:56.365149    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:56.365214    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:56.376063    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:56.376081    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:56.376086    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:56.391557    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:56.391568    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:56.409285    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:56.409296    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:56.421972    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:56.421983    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:56.446532    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:56.446552    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:56.462132    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:56.462142    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:56.474298    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:56.474312    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:56.479388    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:56.479400    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:56.516686    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:56.516699    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:56.529168    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:56.529180    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:56.541802    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:56.541812    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:56.561051    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:56.561062    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:56.579871    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:56.579884    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:56.616301    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:56.616314    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:56.630559    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:56.630571    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:59.146328    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:04.148463    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:04.148747    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:31:04.168581    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:31:04.168668    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:31:04.187037    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:31:04.187119    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:31:04.198934    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:31:04.199001    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:31:04.209479    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:31:04.209550    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:31:04.220320    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:31:04.220384    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:31:04.231229    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:31:04.231290    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:31:04.241928    3801 logs.go:276] 0 containers: []
	W0827 15:31:04.241938    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:31:04.241988    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:31:04.252069    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:31:04.252084    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:31:04.252090    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:31:04.266323    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:31:04.266332    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:31:04.281289    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:31:04.281299    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:31:04.305545    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:31:04.305553    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:31:04.309948    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:31:04.309955    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:31:04.322644    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:31:04.322656    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:31:04.334349    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:31:04.334365    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:31:04.345649    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:31:04.345660    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:31:04.357184    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:31:04.357195    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:31:04.374628    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:31:04.374639    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:31:04.386137    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:31:04.386148    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:31:04.421535    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:31:04.421545    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:31:04.457814    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:31:04.457827    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:31:04.472954    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:31:04.472965    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:31:04.486592    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:31:04.486603    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:31:06.999793    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:12.001872    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:12.001965    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:31:12.018444    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:31:12.018513    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:31:12.029436    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:31:12.029505    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:31:12.040735    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:31:12.040811    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:31:12.051506    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:31:12.051575    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:31:12.061904    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:31:12.061967    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:31:12.072627    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:31:12.072695    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:31:12.082835    3801 logs.go:276] 0 containers: []
	W0827 15:31:12.082845    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:31:12.082904    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:31:12.093726    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:31:12.093741    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:31:12.093746    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:31:12.108332    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:31:12.108342    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:31:12.119974    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:31:12.119987    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:31:12.139652    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:31:12.139662    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:31:12.151242    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:31:12.151252    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:31:12.155850    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:31:12.155858    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:31:12.167501    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:31:12.167512    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:31:12.184366    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:31:12.184377    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:31:12.195593    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:31:12.195603    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:31:12.213314    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:31:12.213325    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:31:12.225324    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:31:12.225335    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:31:12.260668    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:31:12.260675    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:31:12.295511    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:31:12.295521    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:31:12.310065    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:31:12.310078    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:31:12.334639    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:31:12.334656    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:31:14.854104    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:19.854818    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:19.854941    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:31:19.871478    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:31:19.871555    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:31:19.882618    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:31:19.882690    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:31:19.893463    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:31:19.893536    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:31:19.904582    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:31:19.904650    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:31:19.916102    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:31:19.916175    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:31:19.935730    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:31:19.935798    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:31:19.945797    3801 logs.go:276] 0 containers: []
	W0827 15:31:19.945807    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:31:19.945858    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:31:19.956642    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:31:19.956660    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:31:19.956666    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:31:19.968982    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:31:19.968993    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:31:19.980791    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:31:19.980800    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:31:19.992880    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:31:19.992890    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:31:20.004333    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:31:20.004345    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:31:20.018334    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:31:20.018353    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:31:20.030376    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:31:20.030387    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:31:20.042088    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:31:20.042100    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:31:20.057157    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:31:20.057168    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:31:20.075638    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:31:20.075647    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:31:20.080231    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:31:20.080238    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:31:20.112976    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:31:20.112984    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:31:20.126868    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:31:20.126879    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:31:20.139590    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:31:20.139603    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:31:20.164298    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:31:20.164306    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:31:22.705847    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:27.707911    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:27.708117    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:31:27.725657    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:31:27.725746    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:31:27.739317    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:31:27.739392    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:31:27.750967    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:31:27.751034    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:31:27.761454    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:31:27.761526    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:31:27.771548    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:31:27.771619    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:31:27.782805    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:31:27.782879    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:31:27.793156    3801 logs.go:276] 0 containers: []
	W0827 15:31:27.793169    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:31:27.793224    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:31:27.802936    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:31:27.802955    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:31:27.802960    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:31:27.838959    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:31:27.838972    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:31:27.851529    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:31:27.851540    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:31:27.871838    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:31:27.871852    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:31:27.883195    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:31:27.883207    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:31:27.887683    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:31:27.887690    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:31:27.903222    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:31:27.903233    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:31:27.938918    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:31:27.938930    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:31:27.954706    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:31:27.954717    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:31:27.978820    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:31:27.978834    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:31:27.990607    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:31:27.990621    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:31:28.002268    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:31:28.002283    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:31:28.016307    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:31:28.016316    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:31:28.027947    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:31:28.027956    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:31:28.039281    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:31:28.039294    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:31:30.551913    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:35.554233    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:35.558832    3801 out.go:201] 
	W0827 15:31:35.562973    3801 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0827 15:31:35.562991    3801 out.go:270] * 
	W0827 15:31:35.564147    3801 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:31:35.574898    3801 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-301000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-27 15:31:35.689957 -0700 PDT m=+3304.389421585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-301000 -n running-upgrade-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-301000 -n running-upgrade-301000: exit status 2 (15.737796084s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-301000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-671000          | force-systemd-flag-671000 | jenkins | v1.33.1 | 27 Aug 24 15:21 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-232000              | force-systemd-env-232000  | jenkins | v1.33.1 | 27 Aug 24 15:21 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-232000           | force-systemd-env-232000  | jenkins | v1.33.1 | 27 Aug 24 15:21 PDT | 27 Aug 24 15:21 PDT |
	| start   | -p docker-flags-032000                | docker-flags-032000       | jenkins | v1.33.1 | 27 Aug 24 15:21 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-671000             | force-systemd-flag-671000 | jenkins | v1.33.1 | 27 Aug 24 15:21 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-671000          | force-systemd-flag-671000 | jenkins | v1.33.1 | 27 Aug 24 15:21 PDT | 27 Aug 24 15:21 PDT |
	| start   | -p cert-expiration-658000             | cert-expiration-658000    | jenkins | v1.33.1 | 27 Aug 24 15:21 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-032000 ssh               | docker-flags-032000       | jenkins | v1.33.1 | 27 Aug 24 15:21 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-032000 ssh               | docker-flags-032000       | jenkins | v1.33.1 | 27 Aug 24 15:21 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-032000                | docker-flags-032000       | jenkins | v1.33.1 | 27 Aug 24 15:21 PDT | 27 Aug 24 15:21 PDT |
	| start   | -p cert-options-737000                | cert-options-737000       | jenkins | v1.33.1 | 27 Aug 24 15:21 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-737000 ssh               | cert-options-737000       | jenkins | v1.33.1 | 27 Aug 24 15:22 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-737000 -- sudo        | cert-options-737000       | jenkins | v1.33.1 | 27 Aug 24 15:22 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-737000                | cert-options-737000       | jenkins | v1.33.1 | 27 Aug 24 15:22 PDT | 27 Aug 24 15:22 PDT |
	| start   | -p running-upgrade-301000             | minikube                  | jenkins | v1.26.0 | 27 Aug 24 15:22 PDT | 27 Aug 24 15:23 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-301000             | running-upgrade-301000    | jenkins | v1.33.1 | 27 Aug 24 15:23 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-658000             | cert-expiration-658000    | jenkins | v1.33.1 | 27 Aug 24 15:25 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-658000             | cert-expiration-658000    | jenkins | v1.33.1 | 27 Aug 24 15:25 PDT | 27 Aug 24 15:25 PDT |
	| start   | -p kubernetes-upgrade-332000          | kubernetes-upgrade-332000 | jenkins | v1.33.1 | 27 Aug 24 15:25 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-332000          | kubernetes-upgrade-332000 | jenkins | v1.33.1 | 27 Aug 24 15:25 PDT | 27 Aug 24 15:25 PDT |
	| start   | -p kubernetes-upgrade-332000          | kubernetes-upgrade-332000 | jenkins | v1.33.1 | 27 Aug 24 15:25 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-332000          | kubernetes-upgrade-332000 | jenkins | v1.33.1 | 27 Aug 24 15:25 PDT | 27 Aug 24 15:25 PDT |
	| start   | -p stopped-upgrade-443000             | minikube                  | jenkins | v1.26.0 | 27 Aug 24 15:25 PDT | 27 Aug 24 15:26 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-443000 stop           | minikube                  | jenkins | v1.26.0 | 27 Aug 24 15:26 PDT | 27 Aug 24 15:26 PDT |
	| start   | -p stopped-upgrade-443000             | stopped-upgrade-443000    | jenkins | v1.33.1 | 27 Aug 24 15:26 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 15:26:19
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 15:26:19.418906    3939 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:26:19.419029    3939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:26:19.419032    3939 out.go:358] Setting ErrFile to fd 2...
	I0827 15:26:19.419034    3939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:26:19.419170    3939 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:26:19.420383    3939 out.go:352] Setting JSON to false
	I0827 15:26:19.437850    3939 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3344,"bootTime":1724794235,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:26:19.437923    3939 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:26:19.442529    3939 out.go:177] * [stopped-upgrade-443000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:26:19.450523    3939 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:26:19.450568    3939 notify.go:220] Checking for updates...
	I0827 15:26:19.460497    3939 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:26:19.464580    3939 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:26:19.468516    3939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:26:19.471606    3939 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:26:19.474550    3939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:26:19.477800    3939 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:26:19.480520    3939 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0827 15:26:19.483578    3939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:26:19.486525    3939 out.go:177] * Using the qemu2 driver based on existing profile
	I0827 15:26:19.493557    3939 start.go:297] selected driver: qemu2
	I0827 15:26:19.493563    3939 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-443000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0827 15:26:19.493616    3939 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:26:19.496256    3939 cni.go:84] Creating CNI manager for ""
	I0827 15:26:19.496279    3939 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:26:19.496301    3939 start.go:340] cluster config:
	{Name:stopped-upgrade-443000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0827 15:26:19.496353    3939 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:26:19.504560    3939 out.go:177] * Starting "stopped-upgrade-443000" primary control-plane node in "stopped-upgrade-443000" cluster
	I0827 15:26:19.508395    3939 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0827 15:26:19.508410    3939 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0827 15:26:19.508415    3939 cache.go:56] Caching tarball of preloaded images
	I0827 15:26:19.508471    3939 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:26:19.508477    3939 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0827 15:26:19.508524    3939 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/config.json ...
	I0827 15:26:19.508990    3939 start.go:360] acquireMachinesLock for stopped-upgrade-443000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:26:19.509026    3939 start.go:364] duration metric: took 29.291µs to acquireMachinesLock for "stopped-upgrade-443000"
	I0827 15:26:19.509036    3939 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:26:19.509043    3939 fix.go:54] fixHost starting: 
	I0827 15:26:19.509147    3939 fix.go:112] recreateIfNeeded on stopped-upgrade-443000: state=Stopped err=<nil>
	W0827 15:26:19.509156    3939 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:26:19.517336    3939 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-443000" ...
	I0827 15:26:18.895537    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:19.521500    3939 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:26:19.521565    3939 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50458-:22,hostfwd=tcp::50459-:2376,hostname=stopped-upgrade-443000 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/disk.qcow2
	I0827 15:26:19.568051    3939 main.go:141] libmachine: STDOUT: 
	I0827 15:26:19.568083    3939 main.go:141] libmachine: STDERR: 
	I0827 15:26:19.568089    3939 main.go:141] libmachine: Waiting for VM to start (ssh -p 50458 docker@127.0.0.1)...
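The qemu-system-aarch64 invocation above uses user-mode networking with hostfwd rules, so the guest's SSH (22) and Docker (2376) ports appear on host localhost ports; "Waiting for VM to start" is then just a dial loop against the forwarded SSH port. A rough Go sketch of that launch-and-wait pattern (hypothetical and heavily trimmed flags; not the actual qemu2 driver code):

	package main
	
	import (
		"fmt"
		"net"
		"os/exec"
		"time"
	)
	
	func main() {
		// hostfwd maps host 127.0.0.1:50458 -> guest :22, as in the log's -nic flag.
		cmd := exec.Command("qemu-system-aarch64",
			"-M", "virt,highmem=off", "-cpu", "host", "-accel", "hvf",
			"-m", "2200", "-smp", "2",
			"-nic", "user,model=virtio,hostfwd=tcp::50458-:22",
			"-daemonize", "disk.qcow2") // illustrative disk image path
		if err := cmd.Run(); err != nil {
			fmt.Println("qemu failed to start:", err)
			return
		}
		// "Waiting for VM to start": dial the forwarded SSH port until it accepts.
		for i := 0; i < 90; i++ {
			conn, err := net.DialTimeout("tcp", "127.0.0.1:50458", time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("guest SSH is reachable")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for guest SSH")
	}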
	I0827 15:26:23.898075    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:23.898321    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:26:23.924956    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:26:23.925080    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:26:23.952435    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:26:23.952516    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:26:23.964865    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:26:23.964935    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:26:23.976988    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:26:23.977058    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:26:23.988428    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:26:23.988500    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:26:23.999749    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:26:23.999834    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:26:24.010758    3801 logs.go:276] 0 containers: []
	W0827 15:26:24.010769    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:26:24.010827    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:26:24.026449    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:26:24.026467    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:26:24.026473    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:26:24.039696    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:26:24.039709    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:26:24.054743    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:26:24.054755    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:26:24.090142    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:26:24.090155    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:26:24.103822    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:26:24.103834    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:26:24.114649    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:26:24.114662    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:26:24.134113    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:26:24.134127    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:26:24.146178    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:26:24.146193    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:26:24.168790    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:26:24.168797    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:26:24.182945    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:26:24.182956    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:26:24.197596    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:26:24.197608    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:26:24.214757    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:26:24.214767    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:26:24.226134    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:26:24.226146    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:26:24.237022    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:26:24.237035    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:26:24.273837    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:26:24.273844    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:26:24.277859    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:26:24.277866    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:26:24.289547    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:26:24.289558    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
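Each "Gathering logs for …" round above follows the same two-step shape: list container IDs for a component with a docker ps name filter, then tail each container's last 400 log lines. A condensed Go sketch of that loop (illustrative; minikube's logs.go runs these over SSH, covers more components, and handles errors):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerIDs lists IDs of containers whose name matches k8s_<component>.
	func containerIDs(component string) []string {
		out, _ := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		return strings.Fields(string(out))
	}
	
	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			for _, id := range containerIDs(c) {
				fmt.Printf("==> %s [%s] <==\n", c, id)
				// Mirrors: docker logs --tail 400 <id>
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Print(string(logs))
			}
		}
	}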
	I0827 15:26:26.803308    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:31.805473    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:31.805581    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:26:31.817349    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:26:31.817419    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:26:31.828763    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:26:31.828833    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:26:31.840363    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:26:31.840433    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:26:31.852922    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:26:31.852996    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:26:31.864718    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:26:31.864785    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:26:31.876976    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:26:31.877048    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:26:31.890576    3801 logs.go:276] 0 containers: []
	W0827 15:26:31.890587    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:26:31.890643    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:26:31.903154    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:26:31.903176    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:26:31.903183    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:26:31.931468    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:26:31.931487    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:26:31.949674    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:26:31.949688    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:26:31.961197    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:26:31.961207    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:26:31.977199    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:26:31.977209    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:26:31.989195    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:26:31.989205    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:26:32.006655    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:26:32.006665    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:26:32.018165    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:26:32.018173    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:26:32.056561    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:26:32.056572    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:26:32.072087    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:26:32.072097    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:26:32.086128    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:26:32.086136    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:26:32.098007    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:26:32.098017    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:26:32.133440    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:26:32.133450    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:26:32.149369    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:26:32.149381    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:26:32.163707    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:26:32.163718    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:26:32.168390    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:26:32.168397    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:26:32.179798    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:26:32.179807    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:26:34.699161    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:39.631415    3939 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/config.json ...
	I0827 15:26:39.631908    3939 machine.go:93] provisionDockerMachine start ...
	I0827 15:26:39.631992    3939 main.go:141] libmachine: Using SSH client type: native
	I0827 15:26:39.632283    3939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 50458 <nil> <nil>}
	I0827 15:26:39.632294    3939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0827 15:26:39.710834    3939 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0827 15:26:39.710863    3939 buildroot.go:166] provisioning hostname "stopped-upgrade-443000"
	I0827 15:26:39.710962    3939 main.go:141] libmachine: Using SSH client type: native
	I0827 15:26:39.711135    3939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 50458 <nil> <nil>}
	I0827 15:26:39.711144    3939 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-443000 && echo "stopped-upgrade-443000" | sudo tee /etc/hostname
	I0827 15:26:39.781661    3939 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-443000
	
	I0827 15:26:39.781718    3939 main.go:141] libmachine: Using SSH client type: native
	I0827 15:26:39.781851    3939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 50458 <nil> <nil>}
	I0827 15:26:39.781861    3939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-443000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-443000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-443000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 15:26:39.848386    3939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 15:26:39.848400    3939 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19522-983/.minikube CaCertPath:/Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19522-983/.minikube}
	I0827 15:26:39.848413    3939 buildroot.go:174] setting up certificates
	I0827 15:26:39.848419    3939 provision.go:84] configureAuth start
	I0827 15:26:39.848426    3939 provision.go:143] copyHostCerts
	I0827 15:26:39.848508    3939 exec_runner.go:144] found /Users/jenkins/minikube-integration/19522-983/.minikube/ca.pem, removing ...
	I0827 15:26:39.848515    3939 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19522-983/.minikube/ca.pem
	I0827 15:26:39.849149    3939 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19522-983/.minikube/ca.pem (1078 bytes)
	I0827 15:26:39.849347    3939 exec_runner.go:144] found /Users/jenkins/minikube-integration/19522-983/.minikube/cert.pem, removing ...
	I0827 15:26:39.849351    3939 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19522-983/.minikube/cert.pem
	I0827 15:26:39.849408    3939 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19522-983/.minikube/cert.pem (1123 bytes)
	I0827 15:26:39.849520    3939 exec_runner.go:144] found /Users/jenkins/minikube-integration/19522-983/.minikube/key.pem, removing ...
	I0827 15:26:39.849523    3939 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19522-983/.minikube/key.pem
	I0827 15:26:39.849575    3939 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19522-983/.minikube/key.pem (1675 bytes)
	I0827 15:26:39.849663    3939 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19522-983/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-443000 san=[127.0.0.1 localhost minikube stopped-upgrade-443000]
	I0827 15:26:39.966813    3939 provision.go:177] copyRemoteCerts
	I0827 15:26:39.966864    3939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 15:26:39.966874    3939 sshutil.go:53] new ssh client: &{IP:localhost Port:50458 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0827 15:26:40.003890    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0827 15:26:40.011503    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0827 15:26:40.019027    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0827 15:26:40.025850    3939 provision.go:87] duration metric: took 177.430875ms to configureAuth
	I0827 15:26:40.025862    3939 buildroot.go:189] setting minikube options for container-runtime
	I0827 15:26:40.025998    3939 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:26:40.026033    3939 main.go:141] libmachine: Using SSH client type: native
	I0827 15:26:40.026121    3939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 50458 <nil> <nil>}
	I0827 15:26:40.026128    3939 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0827 15:26:40.089686    3939 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0827 15:26:40.089696    3939 buildroot.go:70] root file system type: tmpfs
	I0827 15:26:40.089762    3939 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0827 15:26:40.089817    3939 main.go:141] libmachine: Using SSH client type: native
	I0827 15:26:40.089948    3939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 50458 <nil> <nil>}
	I0827 15:26:40.089981    3939 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0827 15:26:40.156708    3939 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0827 15:26:40.156767    3939 main.go:141] libmachine: Using SSH client type: native
	I0827 15:26:40.156895    3939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 50458 <nil> <nil>}
	I0827 15:26:40.156905    3939 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0827 15:26:40.507042    3939 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0827 15:26:40.507054    3939 machine.go:96] duration metric: took 875.165792ms to provisionDockerMachine
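The docker.service update above is deliberately idempotent: the generated unit is written to docker.service.new, compared against the live file with diff, and only swapped in (followed by daemon-reload, enable, restart) when the content differs, so an unchanged config never restarts Docker. A small Go sketch of the same compare-before-replace pattern (hypothetical paths and content, trimmed to the reload step):

	package main
	
	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)
	
	// updateUnit replaces path with generated only when the content differs,
	// then reloads systemd so the change is picked up.
	func updateUnit(path string, generated []byte) error {
		current, _ := os.ReadFile(path) // a missing file reads as empty
		if bytes.Equal(current, generated) {
			return nil // unchanged: skip the rewrite and the service restart
		}
		if err := os.WriteFile(path+".new", generated, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		return exec.Command("systemctl", "daemon-reload").Run()
	}
	
	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		if err := updateUnit("docker.service", unit); err != nil { // illustrative path
			fmt.Println(err)
		}
	}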
	I0827 15:26:40.507065    3939 start.go:293] postStartSetup for "stopped-upgrade-443000" (driver="qemu2")
	I0827 15:26:40.507072    3939 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 15:26:40.507122    3939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 15:26:40.507131    3939 sshutil.go:53] new ssh client: &{IP:localhost Port:50458 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0827 15:26:40.539863    3939 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 15:26:40.541177    3939 info.go:137] Remote host: Buildroot 2021.02.12
	I0827 15:26:40.541185    3939 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19522-983/.minikube/addons for local assets ...
	I0827 15:26:40.541279    3939 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19522-983/.minikube/files for local assets ...
	I0827 15:26:40.541400    3939 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19522-983/.minikube/files/etc/ssl/certs/14632.pem -> 14632.pem in /etc/ssl/certs
	I0827 15:26:40.541530    3939 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 15:26:40.544323    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/files/etc/ssl/certs/14632.pem --> /etc/ssl/certs/14632.pem (1708 bytes)
	I0827 15:26:40.551521    3939 start.go:296] duration metric: took 44.452042ms for postStartSetup
	I0827 15:26:40.551534    3939 fix.go:56] duration metric: took 21.04318675s for fixHost
	I0827 15:26:40.551567    3939 main.go:141] libmachine: Using SSH client type: native
	I0827 15:26:40.551676    3939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 50458 <nil> <nil>}
	I0827 15:26:40.551684    3939 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 15:26:40.616894    3939 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724797600.242501879
	
	I0827 15:26:40.616902    3939 fix.go:216] guest clock: 1724797600.242501879
	I0827 15:26:40.616906    3939 fix.go:229] Guest: 2024-08-27 15:26:40.242501879 -0700 PDT Remote: 2024-08-27 15:26:40.551536 -0700 PDT m=+21.152239501 (delta=-309.034121ms)
	I0827 15:26:40.616917    3939 fix.go:200] guest clock delta is within tolerance: -309.034121ms
	I0827 15:26:40.616919    3939 start.go:83] releasing machines lock for "stopped-upgrade-443000", held for 21.108583209s
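The fix.go lines above implement a guest-clock sanity check: run date +%s.%N inside the guest over SSH, subtract the host's wall clock, and accept the result when the delta stays inside a tolerance window (here -309ms). A tiny Go sketch of that check (the 2s tolerance is an assumption, not minikube's actual threshold):

	package main
	
	import (
		"fmt"
		"time"
	)
	
	// clockDeltaOK reports whether |guest - host| is inside the tolerance window.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}
	
	func main() {
		host := time.Now()
		guest := host.Add(-309034121 * time.Nanosecond) // the -309.034121ms delta from the log
		fmt.Println("within tolerance:", clockDeltaOK(guest, host, 2*time.Second))
	}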
	I0827 15:26:40.616984    3939 ssh_runner.go:195] Run: cat /version.json
	I0827 15:26:40.616994    3939 sshutil.go:53] new ssh client: &{IP:localhost Port:50458 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0827 15:26:40.616985    3939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 15:26:40.617030    3939 sshutil.go:53] new ssh client: &{IP:localhost Port:50458 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	W0827 15:26:40.617613    3939 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50458: connect: connection refused
	I0827 15:26:40.617634    3939 retry.go:31] will retry after 181.821517ms: dial tcp [::1]:50458: connect: connection refused
	W0827 15:26:40.649123    3939 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0827 15:26:40.649171    3939 ssh_runner.go:195] Run: systemctl --version
	I0827 15:26:40.650951    3939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 15:26:40.652482    3939 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 15:26:40.652505    3939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0827 15:26:40.655480    3939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0827 15:26:40.659961    3939 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0827 15:26:40.659970    3939 start.go:495] detecting cgroup driver to use...
	I0827 15:26:40.660048    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 15:26:40.667095    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0827 15:26:40.670587    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0827 15:26:40.673520    3939 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0827 15:26:40.673544    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0827 15:26:40.676453    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0827 15:26:40.680101    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0827 15:26:40.683662    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0827 15:26:40.687130    3939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 15:26:40.690067    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0827 15:26:40.692883    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0827 15:26:40.696157    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0827 15:26:40.699527    3939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 15:26:40.702327    3939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 15:26:40.704856    3939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:26:40.787582    3939 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0827 15:26:40.798573    3939 start.go:495] detecting cgroup driver to use...
	I0827 15:26:40.798650    3939 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0827 15:26:40.807557    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 15:26:40.812123    3939 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 15:26:40.823262    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 15:26:40.827709    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0827 15:26:40.832784    3939 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0827 15:26:40.882518    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0827 15:26:40.888170    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 15:26:40.893857    3939 ssh_runner.go:195] Run: which cri-dockerd
	I0827 15:26:40.894974    3939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0827 15:26:40.897347    3939 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0827 15:26:40.902078    3939 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0827 15:26:40.979468    3939 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0827 15:26:41.059983    3939 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0827 15:26:41.060052    3939 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0827 15:26:41.065700    3939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:26:41.145939    3939 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0827 15:26:42.306544    3939 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.160626667s)
	I0827 15:26:42.306619    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0827 15:26:42.311919    3939 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0827 15:26:42.318525    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0827 15:26:42.323160    3939 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0827 15:26:42.397751    3939 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0827 15:26:42.479555    3939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:26:42.553896    3939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0827 15:26:42.559760    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0827 15:26:42.564684    3939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:26:42.643463    3939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0827 15:26:42.681846    3939 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0827 15:26:42.681924    3939 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0827 15:26:42.684870    3939 start.go:563] Will wait 60s for crictl version
	I0827 15:26:42.684929    3939 ssh_runner.go:195] Run: which crictl
	I0827 15:26:42.686811    3939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 15:26:42.704959    3939 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0827 15:26:42.705028    3939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0827 15:26:42.721202    3939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
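	Both probes above use Docker's Go-template formatter to extract a single field rather than parsing the full `docker version` output:

	    docker version --format '{{.Server.Version}}'   # prints just: 20.10.16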
	I0827 15:26:39.701576    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:39.701725    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:26:39.718890    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:26:39.718967    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:26:39.732071    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:26:39.732145    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:26:39.742977    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:26:39.743041    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:26:39.757971    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:26:39.758044    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:26:39.776706    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:26:39.776782    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:26:39.787658    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:26:39.787724    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:26:39.797768    3801 logs.go:276] 0 containers: []
	W0827 15:26:39.797778    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:26:39.797833    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:26:39.808025    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:26:39.808042    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:26:39.808047    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:26:39.820753    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:26:39.820763    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:26:39.825090    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:26:39.825097    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:26:39.839325    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:26:39.839339    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:26:39.855198    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:26:39.855206    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:26:39.871160    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:26:39.871172    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:26:39.883406    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:26:39.883417    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:26:39.897883    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:26:39.897895    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:26:39.938803    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:26:39.938813    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:26:39.950586    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:26:39.950597    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:26:39.970482    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:26:39.970493    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:26:39.985509    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:26:39.985523    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:26:40.009719    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:26:40.009736    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:26:40.046210    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:26:40.046223    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:26:40.061224    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:26:40.061234    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:26:40.073723    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:26:40.073734    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:26:40.091997    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:26:40.092006    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
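	Note the interleaving from here on: pid 3939 is provisioning stopped-upgrade-443000 while pid 3801, a second test profile's process in the same run, polls its own apiserver and dumps diagnostics each time the healthz probe times out. The gathering pattern it repeats is simple (sketch):

	    # locate the (possibly exited) control-plane containers by name, then tail each one
	    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'
	    docker logs --tail 400 <container-id>   # <container-id> taken from the line above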
	I0827 15:26:42.607867    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:42.741691    3939 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0827 15:26:42.741820    3939 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0827 15:26:42.743125    3939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 15:26:42.747061    3939 kubeadm.go:883] updating cluster {Name:stopped-upgrade-443000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0827 15:26:42.747104    3939 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0827 15:26:42.747142    3939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0827 15:26:42.757588    3939 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0827 15:26:42.757612    3939 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0827 15:26:42.757656    3939 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0827 15:26:42.760588    3939 ssh_runner.go:195] Run: which lz4
	I0827 15:26:42.761910    3939 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0827 15:26:42.763122    3939 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0827 15:26:42.763132    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0827 15:26:43.746203    3939 docker.go:649] duration metric: took 984.3615ms to copy over tarball
	I0827 15:26:43.746264    3939 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
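	The preload path avoids pulling images from a registry: a ~360 MB lz4 tarball of the docker image store is copied to the VM and unpacked directly under /var. Reproduced by hand it would look roughly like this (a sketch; direct ssh/scp to the guest is an assumption, since the qemu driver normally tunnels through a forwarded port):

	    scp preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 root@<node>:/preloaded.tar.lz4
	    ssh root@<node> 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'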
	I0827 15:26:47.609892    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:47.610022    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:26:47.621610    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:26:47.621698    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:26:47.633412    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:26:47.633486    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:26:47.656470    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:26:47.656546    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:26:47.671999    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:26:47.672074    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:26:47.687397    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:26:47.687468    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:26:47.702630    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:26:47.702702    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:26:47.713918    3801 logs.go:276] 0 containers: []
	W0827 15:26:47.713931    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:26:47.713993    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:26:47.726221    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:26:47.726239    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:26:47.726244    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:26:47.741463    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:26:47.741472    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:26:47.753642    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:26:47.753655    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:26:47.770918    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:26:47.770933    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:26:47.811898    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:26:47.811925    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:26:47.817703    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:26:47.817713    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:26:47.859513    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:26:47.859525    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:26:47.880513    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:26:47.880530    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:26:47.892658    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:26:47.892672    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:26:47.909266    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:26:47.909285    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:26:47.922718    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:26:47.922731    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:26:47.950080    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:26:47.950089    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:26:47.968245    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:26:47.968258    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:26:47.990457    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:26:47.990467    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:26:48.004813    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:26:48.004826    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:26:48.023435    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:26:48.023450    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:26:48.035428    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:26:48.035444    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:26:44.912968    3939 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.166727875s)
	I0827 15:26:44.912981    3939 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0827 15:26:44.928468    3939 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0827 15:26:44.931979    3939 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0827 15:26:44.937277    3939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:26:45.015173    3939 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0827 15:26:46.604960    3939 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.589822541s)
	I0827 15:26:46.605058    3939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0827 15:26:46.618982    3939 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0827 15:26:46.618991    3939 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0827 15:26:46.618996    3939 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0827 15:26:46.624516    3939 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:26:46.627126    3939 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:26:46.628705    3939 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:26:46.629018    3939 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:26:46.630833    3939 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:26:46.630858    3939 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:26:46.632165    3939 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:26:46.632201    3939 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:26:46.634319    3939 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:26:46.634354    3939 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:26:46.634373    3939 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0827 15:26:46.635540    3939 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0827 15:26:46.636314    3939 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:26:46.636348    3939 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0827 15:26:46.638539    3939 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:26:46.638569    3939 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	W0827 15:26:47.367345    3939 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0827 15:26:47.367749    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:26:47.396367    3939 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0827 15:26:47.396425    3939 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:26:47.396524    3939 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:26:47.417997    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0827 15:26:47.418143    3939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0827 15:26:47.419994    3939 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0827 15:26:47.420007    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0827 15:26:47.451399    3939 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0827 15:26:47.451413    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0827 15:26:47.561447    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:26:47.602846    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:26:47.610770    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:26:47.622958    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:26:47.706165    3939 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
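	Because the preloaded tarball still carries k8s.gcr.io image names while this code path expects registry.k8s.io, and the cached storage-provisioner was built for the wrong architecture, each affected image is removed and re-loaded from the per-arch cache. The load itself is a streamed docker load (sketch):

	    sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load
	    docker image inspect --format '{{.Id}}' gcr.io/k8s-minikube/storage-provisioner:v5   # confirm the expected hash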
	I0827 15:26:47.706225    3939 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0827 15:26:47.706245    3939 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:26:47.706247    3939 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0827 15:26:47.706259    3939 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:26:47.706297    3939 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:26:47.706297    3939 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:26:47.706344    3939 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0827 15:26:47.706386    3939 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:26:47.706404    3939 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:26:47.706358    3939 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0827 15:26:47.706457    3939 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:26:47.706475    3939 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:26:47.723932    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0827 15:26:47.728469    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0827 15:26:47.736244    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0827 15:26:47.736259    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0827 15:26:47.776591    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0827 15:26:47.780457    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0827 15:26:47.787921    3939 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0827 15:26:47.788046    3939 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0827 15:26:47.788065    3939 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0827 15:26:47.788087    3939 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0827 15:26:47.788049    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:26:47.794991    3939 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0827 15:26:47.795014    3939 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0827 15:26:47.795069    3939 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0827 15:26:47.800992    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0827 15:26:47.802664    3939 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0827 15:26:47.802688    3939 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:26:47.802744    3939 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:26:47.816033    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0827 15:26:47.816072    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0827 15:26:47.816163    3939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0827 15:26:47.816169    3939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0827 15:26:47.817877    3939 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0827 15:26:47.817895    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0827 15:26:47.818142    3939 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0827 15:26:47.818154    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0827 15:26:47.840181    3939 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0827 15:26:47.840196    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0827 15:26:47.883889    3939 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0827 15:26:47.883917    3939 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0827 15:26:47.883924    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0827 15:26:47.920412    3939 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0827 15:26:47.920457    3939 cache_images.go:92] duration metric: took 1.301497875s to LoadCachedImages
	W0827 15:26:47.920510    3939 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0827 15:26:47.920518    3939 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0827 15:26:47.920586    3939 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-443000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
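	The generated unit uses the standard systemd override idiom: the bare ExecStart= line first clears the command inherited from the packaged kubelet.service, and the following ExecStart= installs the minikube-specific invocation. The merged result can be inspected on the node with (sketch):

	    sudo systemctl cat kubelet    # prints kubelet.service plus the 10-kubeadm.conf drop-in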
	I0827 15:26:47.920694    3939 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0827 15:26:47.935513    3939 cni.go:84] Creating CNI manager for ""
	I0827 15:26:47.935526    3939 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:26:47.935531    3939 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 15:26:47.935539    3939 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-443000 NodeName:stopped-upgrade-443000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 15:26:47.935612    3939 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-443000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
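	The rendered file stacks four YAML documents: kubeadm's InitConfiguration and ClusterConfiguration plus the KubeletConfiguration and KubeProxyConfiguration component configs. One way to sanity-check a file like this before the phased init further below, as a sketch (flag behavior varies slightly across kubeadm releases):

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run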
	I0827 15:26:47.935681    3939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0827 15:26:47.939387    3939 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 15:26:47.939438    3939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0827 15:26:47.942367    3939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0827 15:26:47.947557    3939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 15:26:47.952852    3939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0827 15:26:47.958596    3939 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0827 15:26:47.960051    3939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 15:26:47.964618    3939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:26:48.041647    3939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 15:26:48.047368    3939 certs.go:68] Setting up /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000 for IP: 10.0.2.15
	I0827 15:26:48.047381    3939 certs.go:194] generating shared ca certs ...
	I0827 15:26:48.047390    3939 certs.go:226] acquiring lock for ca certs: {Name:mkc3f4287026c100ff774c65b8333a833cfe8f85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:26:48.047568    3939 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19522-983/.minikube/ca.key
	I0827 15:26:48.047625    3939 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19522-983/.minikube/proxy-client-ca.key
	I0827 15:26:48.047633    3939 certs.go:256] generating profile certs ...
	I0827 15:26:48.047717    3939 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/client.key
	I0827 15:26:48.047738    3939 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4
	I0827 15:26:48.047751    3939 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0827 15:26:48.155771    3939 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4 ...
	I0827 15:26:48.155786    3939 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4: {Name:mk9e9e95b75e538296521b4b4d1d83521f1d6e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:26:48.156105    3939 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4 ...
	I0827 15:26:48.156110    3939 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4: {Name:mk0380fe0088fdd2112c3f42dffcefaab127de8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:26:48.156246    3939 certs.go:381] copying /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4 -> /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.crt
	I0827 15:26:48.156923    3939 certs.go:385] copying /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4 -> /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.key
	I0827 15:26:48.157095    3939 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/proxy-client.key
	I0827 15:26:48.157236    3939 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/1463.pem (1338 bytes)
	W0827 15:26:48.157266    3939 certs.go:480] ignoring /Users/jenkins/minikube-integration/19522-983/.minikube/certs/1463_empty.pem, impossibly tiny 0 bytes
	I0827 15:26:48.157272    3939 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca-key.pem (1679 bytes)
	I0827 15:26:48.157295    3939 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem (1078 bytes)
	I0827 15:26:48.157314    3939 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem (1123 bytes)
	I0827 15:26:48.157332    3939 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/key.pem (1675 bytes)
	I0827 15:26:48.157374    3939 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/files/etc/ssl/certs/14632.pem (1708 bytes)
	I0827 15:26:48.157714    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 15:26:48.164561    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0827 15:26:48.171981    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 15:26:48.179345    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0827 15:26:48.186648    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0827 15:26:48.193505    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 15:26:48.200227    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 15:26:48.207493    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0827 15:26:48.214882    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/certs/1463.pem --> /usr/share/ca-certificates/1463.pem (1338 bytes)
	I0827 15:26:48.221671    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/files/etc/ssl/certs/14632.pem --> /usr/share/ca-certificates/14632.pem (1708 bytes)
	I0827 15:26:48.228230    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 15:26:48.235294    3939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 15:26:48.240291    3939 ssh_runner.go:195] Run: openssl version
	I0827 15:26:48.242152    3939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 15:26:48.244887    3939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 15:26:48.246414    3939 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0827 15:26:48.246436    3939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 15:26:48.248081    3939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 15:26:48.251245    3939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463.pem && ln -fs /usr/share/ca-certificates/1463.pem /etc/ssl/certs/1463.pem"
	I0827 15:26:48.254217    3939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463.pem
	I0827 15:26:48.255509    3939 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 21:43 /usr/share/ca-certificates/1463.pem
	I0827 15:26:48.255526    3939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463.pem
	I0827 15:26:48.257247    3939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1463.pem /etc/ssl/certs/51391683.0"
	I0827 15:26:48.260319    3939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14632.pem && ln -fs /usr/share/ca-certificates/14632.pem /etc/ssl/certs/14632.pem"
	I0827 15:26:48.264064    3939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14632.pem
	I0827 15:26:48.265531    3939 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 21:43 /usr/share/ca-certificates/14632.pem
	I0827 15:26:48.265549    3939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14632.pem
	I0827 15:26:48.267192    3939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14632.pem /etc/ssl/certs/3ec20f2e.0"
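	The test -L || ln -fs rounds above recreate OpenSSL's hashed-symlink lookup scheme: a library finds a CA by hashing its subject name and probing /etc/ssl/certs/<hash>.0. Each link name comes straight from the preceding openssl call; for example (sketch):

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0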
	I0827 15:26:48.270365    3939 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 15:26:48.271736    3939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0827 15:26:48.273697    3939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0827 15:26:48.275352    3939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0827 15:26:48.277328    3939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0827 15:26:48.279045    3939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0827 15:26:48.280842    3939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
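	The -checkend 86400 probes ask whether each control-plane certificate remains valid for at least another 24 hours; openssl exits non-zero if not, which is what would trigger regeneration. Standalone (sketch):

	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "valid for >24h" || echo "expires within 24h, regenerate"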
	I0827 15:26:48.282615    3939 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-443000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0827 15:26:48.282679    3939 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0827 15:26:48.292660    3939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0827 15:26:48.295839    3939 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0827 15:26:48.295845    3939 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0827 15:26:48.295864    3939 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0827 15:26:48.299031    3939 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0827 15:26:48.299331    3939 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-443000" does not appear in /Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:26:48.299426    3939 kubeconfig.go:62] /Users/jenkins/minikube-integration/19522-983/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-443000" cluster setting kubeconfig missing "stopped-upgrade-443000" context setting]
	I0827 15:26:48.299662    3939 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/kubeconfig: {Name:mk76bdfc088f48bbbf806c94a3244a333f8aeabd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:26:48.300205    3939 kapi.go:59] client config for stopped-upgrade-443000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/client.key", CAFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103fdbeb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0827 15:26:48.300536    3939 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0827 15:26:48.303277    3939 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-443000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
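	Drift detection is nothing more than a unified diff between the kubeadm.yaml already on the node and the freshly rendered .new file; any non-empty diff (here the cri-dockerd socket gained a unix:// scheme and the cgroup driver changed from systemd to cgroupfs) selects the reconfigure path. As a sketch:

	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	      && echo "no drift" || echo "drift detected, reconfigure from .new"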
	I0827 15:26:48.303283    3939 kubeadm.go:1160] stopping kube-system containers ...
	I0827 15:26:48.303319    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0827 15:26:48.313900    3939 docker.go:483] Stopping containers: [165d46598547 f4d3cadbd368 9cd919fac506 2185b7616386 a9f742447589 585e47bfe28a cb4c8257b0f2 69c30e03f3a6]
	I0827 15:26:48.313969    3939 ssh_runner.go:195] Run: docker stop 165d46598547 f4d3cadbd368 9cd919fac506 2185b7616386 a9f742447589 585e47bfe28a cb4c8257b0f2 69c30e03f3a6
	I0827 15:26:48.324734    3939 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0827 15:26:48.330541    3939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 15:26:48.333336    3939 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 15:26:48.333343    3939 kubeadm.go:157] found existing configuration files:
	
	I0827 15:26:48.333369    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/admin.conf
	I0827 15:26:48.336045    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 15:26:48.336072    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 15:26:48.339153    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/kubelet.conf
	I0827 15:26:48.341943    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 15:26:48.341968    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 15:26:48.344472    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/controller-manager.conf
	I0827 15:26:48.347452    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 15:26:48.347472    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 15:26:48.350278    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/scheduler.conf
	I0827 15:26:48.352621    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 15:26:48.352640    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0827 15:26:48.355802    3939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 15:26:48.358921    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 15:26:48.382025    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 15:26:48.847475    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0827 15:26:48.978724    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 15:26:48.998860    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
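	Rather than a full kubeadm init, the restart path replays only the phases that rebuild the state wiped above: certificates, kubeconfigs, kubelet bootstrap, static control-plane manifests, and local etcd. The same sequence as a loop (sketch; the unquoted $phase word-splits deliberately):

	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done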
	I0827 15:26:49.027264    3939 api_server.go:52] waiting for apiserver process to appear ...
	I0827 15:26:49.027352    3939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 15:26:50.555303    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:49.529502    3939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 15:26:50.029390    3939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 15:26:50.033570    3939 api_server.go:72] duration metric: took 1.006341375s to wait for apiserver process to appear ...
	I0827 15:26:50.033581    3939 api_server.go:88] waiting for apiserver healthz status ...
	I0827 15:26:50.033591    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
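
From here on, both test processes (pids 3801 and 3939) poll the same endpoint and their log lines interleave out of timestamp order. Each "Checking apiserver healthz" that is later answered by "stopped: ... context deadline exceeded" is one timed-out probe; the roughly 5 s gap between the paired lines suggests a 5 s client timeout. As a plain-shell equivalent (the timeout is inferred from the log, not confirmed by it):

    # Poll the apiserver health endpoint until it answers successfully.
    until curl -k -fsS --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
      sleep 1
    done
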
	I0827 15:26:55.557331    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:55.557428    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:26:55.568341    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:26:55.568415    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:26:55.579383    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:26:55.579450    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:26:55.592023    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:26:55.592089    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:26:55.602117    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:26:55.602190    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:26:55.613309    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:26:55.613377    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:26:55.625130    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:26:55.625194    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:26:55.635264    3801 logs.go:276] 0 containers: []
	W0827 15:26:55.635275    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:26:55.635336    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:26:55.646152    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:26:55.646168    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:26:55.646174    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:26:55.650408    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:26:55.650417    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:26:55.684600    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:26:55.684610    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:26:55.698661    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:26:55.698670    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:26:55.711820    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:26:55.711830    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:26:55.726306    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:26:55.726319    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:26:55.744290    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:26:55.744299    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:26:55.758013    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:26:55.758025    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:26:55.772199    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:26:55.772209    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:26:55.783558    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:26:55.783570    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:26:55.807397    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:26:55.807404    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:26:55.818886    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:26:55.818896    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:26:55.857323    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:26:55.857331    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:26:55.869787    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:26:55.869797    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:26:55.880951    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:26:55.880962    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:26:55.896325    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:26:55.896335    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:26:55.908028    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:26:55.908038    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
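
Each failed healthz probe triggers the diagnostic sweep above, which repeats several more times below: discover every component's containers by Docker name filter, then dump the last 400 lines from each. The whole cycle reduces to the following sketch (container name prefixes and flags copied from the log):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
        echo "== ${c} ${id} =="
        docker logs --tail 400 "$id"
      done
    done
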
	I0827 15:26:55.035613    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:55.035640    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:58.421507    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:00.035764    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:00.035818    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:03.423621    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:03.423771    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:27:03.435268    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:27:03.435348    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:27:03.448089    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:27:03.448164    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:27:03.458923    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:27:03.458994    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:27:03.469010    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:27:03.469077    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:27:03.480008    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:27:03.480070    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:27:03.490817    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:27:03.490885    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:27:03.500770    3801 logs.go:276] 0 containers: []
	W0827 15:27:03.500782    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:27:03.500839    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:27:03.511780    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:27:03.511797    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:27:03.511803    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:27:03.527723    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:27:03.527735    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:27:03.539943    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:27:03.539954    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:27:03.554731    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:27:03.554741    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:27:03.588425    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:27:03.588435    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:27:03.601406    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:27:03.601416    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:27:03.618931    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:27:03.618940    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:27:03.657577    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:27:03.657586    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:27:03.677696    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:27:03.677707    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:27:03.692246    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:27:03.692257    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:27:03.703887    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:27:03.703898    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:27:03.717832    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:27:03.717846    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:27:03.733812    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:27:03.733825    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:27:03.745909    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:27:03.745919    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:27:03.757655    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:27:03.757666    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:27:03.769534    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:27:03.769546    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:27:03.792205    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:27:03.792214    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:27:06.298237    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:05.036072    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:05.036137    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:11.300306    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:11.300389    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:27:11.311131    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:27:11.311208    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:27:11.328069    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:27:11.328146    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:27:11.338309    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:27:11.338375    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:27:11.349662    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:27:11.349736    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:27:11.360043    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:27:11.360117    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:27:11.370104    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:27:11.370173    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:27:11.380104    3801 logs.go:276] 0 containers: []
	W0827 15:27:11.380114    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:27:11.380176    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:27:11.390327    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:27:11.390344    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:27:11.390349    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:27:11.404315    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:27:11.404325    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:27:11.417418    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:27:11.417430    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:27:11.432965    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:27:11.432974    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:27:11.445280    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:27:11.445290    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:27:11.456722    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:27:11.456733    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:27:11.480472    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:27:11.480479    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:27:11.485011    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:27:11.485020    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:27:11.520238    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:27:11.520249    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:27:11.533683    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:27:11.533697    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:27:11.548000    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:27:11.548010    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:27:11.586749    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:27:11.586758    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:27:11.603386    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:27:11.603396    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:27:11.619353    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:27:11.619362    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:27:11.631580    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:27:11.631591    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:27:11.645362    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:27:11.645373    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:27:11.664024    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:27:11.664034    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:27:10.036550    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:10.036607    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:14.178021    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:15.037237    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:15.037300    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:19.180055    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:19.180164    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:27:19.192602    3801 logs.go:276] 2 containers: [d03a317dde88 db8bdf21a995]
	I0827 15:27:19.192676    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:27:19.205243    3801 logs.go:276] 2 containers: [da54a26348a1 04b0058ea0e2]
	I0827 15:27:19.205314    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:27:19.218652    3801 logs.go:276] 1 containers: [a58bface4234]
	I0827 15:27:19.218724    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:27:19.229765    3801 logs.go:276] 2 containers: [d120a6c3258b 8755897fc0dd]
	I0827 15:27:19.229835    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:27:19.240177    3801 logs.go:276] 1 containers: [ec60ed04331e]
	I0827 15:27:19.240251    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:27:19.251147    3801 logs.go:276] 2 containers: [cba65d0c1557 e1ffb58c1505]
	I0827 15:27:19.251217    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:27:19.263811    3801 logs.go:276] 0 containers: []
	W0827 15:27:19.263823    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:27:19.263885    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:27:19.274446    3801 logs.go:276] 2 containers: [f5d6a90b238a b7d71e5477c1]
	I0827 15:27:19.274462    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:27:19.274467    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:27:19.298578    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:27:19.298588    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:27:19.341799    3801 logs.go:123] Gathering logs for etcd [04b0058ea0e2] ...
	I0827 15:27:19.341813    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04b0058ea0e2"
	I0827 15:27:19.356487    3801 logs.go:123] Gathering logs for storage-provisioner [f5d6a90b238a] ...
	I0827 15:27:19.356500    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5d6a90b238a"
	I0827 15:27:19.368215    3801 logs.go:123] Gathering logs for kube-controller-manager [e1ffb58c1505] ...
	I0827 15:27:19.368228    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1ffb58c1505"
	I0827 15:27:19.380820    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:27:19.380834    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:27:19.418651    3801 logs.go:123] Gathering logs for etcd [da54a26348a1] ...
	I0827 15:27:19.418663    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da54a26348a1"
	I0827 15:27:19.433110    3801 logs.go:123] Gathering logs for kube-scheduler [d120a6c3258b] ...
	I0827 15:27:19.433122    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d120a6c3258b"
	I0827 15:27:19.447124    3801 logs.go:123] Gathering logs for kube-scheduler [8755897fc0dd] ...
	I0827 15:27:19.447137    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8755897fc0dd"
	I0827 15:27:19.465875    3801 logs.go:123] Gathering logs for kube-apiserver [d03a317dde88] ...
	I0827 15:27:19.465886    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03a317dde88"
	I0827 15:27:19.480000    3801 logs.go:123] Gathering logs for kube-apiserver [db8bdf21a995] ...
	I0827 15:27:19.480009    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8bdf21a995"
	I0827 15:27:19.492376    3801 logs.go:123] Gathering logs for coredns [a58bface4234] ...
	I0827 15:27:19.492386    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a58bface4234"
	I0827 15:27:19.504268    3801 logs.go:123] Gathering logs for storage-provisioner [b7d71e5477c1] ...
	I0827 15:27:19.504281    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d71e5477c1"
	I0827 15:27:19.516059    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:27:19.516073    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:27:19.534214    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:27:19.534225    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:27:19.539031    3801 logs.go:123] Gathering logs for kube-proxy [ec60ed04331e] ...
	I0827 15:27:19.539038    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec60ed04331e"
	I0827 15:27:19.550596    3801 logs.go:123] Gathering logs for kube-controller-manager [cba65d0c1557] ...
	I0827 15:27:19.550606    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba65d0c1557"
	I0827 15:27:22.069600    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:20.038306    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:20.038347    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:27.071733    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:27.071852    3801 kubeadm.go:597] duration metric: took 4m4.305828458s to restartPrimaryControlPlane
	W0827 15:27:27.071953    3801 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0827 15:27:27.071999    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0827 15:27:28.067346    3801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 15:27:28.072646    3801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 15:27:28.075309    3801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 15:27:28.077921    3801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 15:27:28.077926    3801 kubeadm.go:157] found existing configuration files:
	
	I0827 15:27:28.077950    3801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/admin.conf
	I0827 15:27:28.080928    3801 kubeadm.go:163] "https://control-plane.minikube.internal:50266" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 15:27:28.080950    3801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 15:27:28.083460    3801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/kubelet.conf
	I0827 15:27:28.086243    3801 kubeadm.go:163] "https://control-plane.minikube.internal:50266" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 15:27:28.086269    3801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 15:27:28.089242    3801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/controller-manager.conf
	I0827 15:27:28.091735    3801 kubeadm.go:163] "https://control-plane.minikube.internal:50266" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 15:27:28.091759    3801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 15:27:28.094380    3801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/scheduler.conf
	I0827 15:27:28.097264    3801 kubeadm.go:163] "https://control-plane.minikube.internal:50266" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50266 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 15:27:28.097285    3801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0827 15:27:28.099582    3801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0827 15:27:28.116487    3801 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0827 15:27:28.116579    3801 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 15:27:28.167178    3801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 15:27:28.167279    3801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 15:27:28.167342    3801 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0827 15:27:28.218833    3801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 15:27:28.223045    3801 out.go:235]   - Generating certificates and keys ...
	I0827 15:27:28.223081    3801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 15:27:28.223115    3801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 15:27:28.223167    3801 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0827 15:27:28.223201    3801 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0827 15:27:28.223234    3801 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0827 15:27:28.223259    3801 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0827 15:27:28.223294    3801 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0827 15:27:28.223326    3801 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0827 15:27:28.223368    3801 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0827 15:27:28.223405    3801 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0827 15:27:28.223424    3801 kubeadm.go:310] [certs] Using the existing "sa" key
	I0827 15:27:28.223453    3801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 15:27:28.301182    3801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 15:27:28.513891    3801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 15:27:28.567596    3801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 15:27:28.744152    3801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 15:27:28.773390    3801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 15:27:28.773746    3801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 15:27:28.773769    3801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 15:27:28.847916    3801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 15:27:25.039404    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:25.039445    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:28.851497    3801 out.go:235]   - Booting up control plane ...
	I0827 15:27:28.851544    3801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 15:27:28.851583    3801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 15:27:28.851614    3801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 15:27:28.856069    3801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 15:27:28.857006    3801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0827 15:27:33.358352    3801 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501382 seconds
	I0827 15:27:33.358467    3801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0827 15:27:33.363442    3801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0827 15:27:33.882935    3801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0827 15:27:33.883248    3801 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-301000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0827 15:27:34.386421    3801 kubeadm.go:310] [bootstrap-token] Using token: eq0u6u.znq3ywqbbt29bia7
	I0827 15:27:30.040801    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:30.040828    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:34.389828    3801 out.go:235]   - Configuring RBAC rules ...
	I0827 15:27:34.389890    3801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0827 15:27:34.389942    3801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0827 15:27:34.397414    3801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0827 15:27:34.398115    3801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0827 15:27:34.399078    3801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0827 15:27:34.399973    3801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0827 15:27:34.403206    3801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0827 15:27:34.569523    3801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0827 15:27:34.792101    3801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0827 15:27:34.793047    3801 kubeadm.go:310] 
	I0827 15:27:34.793084    3801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0827 15:27:34.793088    3801 kubeadm.go:310] 
	I0827 15:27:34.793124    3801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0827 15:27:34.793127    3801 kubeadm.go:310] 
	I0827 15:27:34.793139    3801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0827 15:27:34.793168    3801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0827 15:27:34.793195    3801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0827 15:27:34.793198    3801 kubeadm.go:310] 
	I0827 15:27:34.793224    3801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0827 15:27:34.793227    3801 kubeadm.go:310] 
	I0827 15:27:34.793249    3801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0827 15:27:34.793251    3801 kubeadm.go:310] 
	I0827 15:27:34.793279    3801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0827 15:27:34.793316    3801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0827 15:27:34.793446    3801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0827 15:27:34.793450    3801 kubeadm.go:310] 
	I0827 15:27:34.793490    3801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0827 15:27:34.793535    3801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0827 15:27:34.793544    3801 kubeadm.go:310] 
	I0827 15:27:34.793587    3801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eq0u6u.znq3ywqbbt29bia7 \
	I0827 15:27:34.793641    3801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e40211cdbb70880cf4203fcff26994c3c3ef69e4bd2b230e97a832f2aa67022 \
	I0827 15:27:34.793655    3801 kubeadm.go:310] 	--control-plane 
	I0827 15:27:34.793657    3801 kubeadm.go:310] 
	I0827 15:27:34.793697    3801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0827 15:27:34.793700    3801 kubeadm.go:310] 
	I0827 15:27:34.793768    3801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eq0u6u.znq3ywqbbt29bia7 \
	I0827 15:27:34.793824    3801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e40211cdbb70880cf4203fcff26994c3c3ef69e4bd2b230e97a832f2aa67022 
	I0827 15:27:34.793894    3801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
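
This successful init output is the fallback path: after 4m4.3s of failed control-plane restarts, pid 3801 reset the cluster and re-initialized it from scratch. Condensed (binary path, CRI socket, config path, and preflight ignore list copied from the Run lines above):

    BIN=/var/lib/minikube/binaries/v1.24.1
    sudo env PATH="$BIN:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
    sudo env PATH="$BIN:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem
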
	I0827 15:27:34.793902    3801 cni.go:84] Creating CNI manager for ""
	I0827 15:27:34.793910    3801 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:27:34.800088    3801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0827 15:27:34.807218    3801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0827 15:27:34.810578    3801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
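
The 496-byte conflist itself is not shown in the log. A representative bridge CNI config of the kind written here looks like the following; the field values are illustrative, not the actual payload:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
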
	I0827 15:27:34.815601    3801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0827 15:27:34.815689    3801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 15:27:34.815693    3801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-301000 minikube.k8s.io/updated_at=2024_08_27T15_27_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf minikube.k8s.io/name=running-upgrade-301000 minikube.k8s.io/primary=true
	I0827 15:27:34.861558    3801 ops.go:34] apiserver oom_adj: -16
	I0827 15:27:34.861569    3801 kubeadm.go:1113] duration metric: took 45.930542ms to wait for elevateKubeSystemPrivileges
	I0827 15:27:34.861580    3801 kubeadm.go:394] duration metric: took 4m12.110122208s to StartCluster
	I0827 15:27:34.861591    3801 settings.go:142] acquiring lock: {Name:mk8039639095abb20902a2ce8e0a004770b18340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:27:34.861678    3801 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:27:34.862044    3801 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/kubeconfig: {Name:mk76bdfc088f48bbbf806c94a3244a333f8aeabd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:27:34.862267    3801 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:27:34.862286    3801 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0827 15:27:34.862336    3801 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-301000"
	I0827 15:27:34.862346    3801 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-301000"
	W0827 15:27:34.862350    3801 addons.go:243] addon storage-provisioner should already be in state true
	I0827 15:27:34.862352    3801 config.go:182] Loaded profile config "running-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:27:34.862362    3801 host.go:66] Checking if "running-upgrade-301000" exists ...
	I0827 15:27:34.862382    3801 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-301000"
	I0827 15:27:34.862409    3801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-301000"
	I0827 15:27:34.863260    3801 kapi.go:59] client config for running-upgrade-301000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/running-upgrade-301000/client.key", CAFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1027b7eb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0827 15:27:34.863385    3801 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-301000"
	W0827 15:27:34.863389    3801 addons.go:243] addon default-storageclass should already be in state true
	I0827 15:27:34.863397    3801 host.go:66] Checking if "running-upgrade-301000" exists ...
	I0827 15:27:34.866169    3801 out.go:177] * Verifying Kubernetes components...
	I0827 15:27:34.866459    3801 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0827 15:27:34.869443    3801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0827 15:27:34.869449    3801 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/running-upgrade-301000/id_rsa Username:docker}
	I0827 15:27:34.873109    3801 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:27:34.877093    3801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:27:34.880145    3801 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 15:27:34.880151    3801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0827 15:27:34.880157    3801 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/running-upgrade-301000/id_rsa Username:docker}
	I0827 15:27:34.958088    3801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 15:27:34.963752    3801 api_server.go:52] waiting for apiserver process to appear ...
	I0827 15:27:34.963801    3801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 15:27:34.967913    3801 api_server.go:72] duration metric: took 105.638417ms to wait for apiserver process to appear ...
	I0827 15:27:34.967922    3801 api_server.go:88] waiting for apiserver healthz status ...
	I0827 15:27:34.967930    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:34.984132    3801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0827 15:27:35.001186    3801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 15:27:35.317727    3801 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0827 15:27:35.317741    3801 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0827 15:27:35.042186    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:35.042211    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:39.969935    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:39.969990    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:40.044205    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:40.044234    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:44.970268    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:44.970295    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:45.046255    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:45.046273    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:49.970493    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:49.970521    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:50.048250    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:50.048398    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:27:50.059300    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:27:50.059374    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:27:50.070162    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:27:50.070232    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:27:50.080812    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:27:50.080882    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:27:50.090998    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:27:50.091074    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:27:50.101614    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:27:50.101686    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:27:50.111915    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:27:50.111987    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:27:50.122576    3939 logs.go:276] 0 containers: []
	W0827 15:27:50.122587    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:27:50.122641    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:27:50.133075    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:27:50.133098    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:27:50.133104    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:27:50.150350    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:27:50.150359    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:27:50.188512    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:27:50.188535    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:27:50.270443    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:27:50.270458    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:27:50.282113    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:27:50.282124    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:27:50.295536    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:27:50.295548    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:27:50.307366    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:27:50.307380    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:27:50.319099    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:27:50.319111    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:27:50.323112    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:27:50.323121    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:27:50.363425    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:27:50.363436    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:27:50.377545    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:27:50.377555    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:27:50.394530    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:27:50.394545    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:27:50.419945    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:27:50.419956    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:27:50.431596    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:27:50.431610    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:27:50.452097    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:27:50.452110    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:27:50.463409    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:27:50.463424    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:27:52.975527    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:54.970821    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:54.970876    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:57.976675    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:57.976910    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:27:57.996055    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:27:57.996137    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:27:58.010527    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:27:58.010611    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:27:58.022381    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:27:58.022460    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:27:58.033015    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:27:58.033084    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:27:58.043370    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:27:58.043437    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:27:58.053583    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:27:58.053649    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:27:58.064032    3939 logs.go:276] 0 containers: []
	W0827 15:27:58.064044    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:27:58.064102    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:27:58.074836    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:27:58.074854    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:27:58.074861    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:27:58.086308    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:27:58.086319    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:27:58.098549    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:27:58.098561    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:27:58.111071    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:27:58.111082    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:27:58.129005    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:27:58.129016    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:27:58.140780    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:27:58.140791    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:27:58.145619    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:27:58.145625    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:27:58.184301    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:27:58.184312    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:27:58.195220    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:27:58.195230    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:27:58.209246    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:27:58.209256    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:27:58.221840    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:27:58.221851    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:27:58.233684    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:27:58.233695    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:27:58.270707    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:27:58.270716    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:27:58.284846    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:27:58.284857    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:27:58.298548    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:27:58.298560    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:27:58.324381    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:27:58.324392    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
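The block above is one complete diagnostic pass: after a healthz probe times out, minikube enumerates each control-plane component's containers with a name filter (logs.go:276) and then tails the last 400 lines of every container it found (logs.go:123), plus the kubelet and Docker journals, dmesg, "kubectl describe nodes", and a container-status listing. Below is a minimal Go sketch of the shape of that discovery-and-tail loop; runSSH and the hard-coded component list are illustrative stand-ins (minikube runs these commands inside the guest over SSH via ssh_runner), and only the command strings are taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runSSH is a hypothetical stand-in for minikube's ssh_runner; here it
// just shells out locally so the sketch is self-contained.
func runSSH(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		// Discovery step, as in the log: one "docker ps" per component.
		ids, _ := runSSH(fmt.Sprintf(
			"docker ps -a --filter=name=k8s_%s --format={{.ID}}", c))
		if ids == "" {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range strings.Fields(ids) {
			// Tail step: last 400 lines of each container found.
			runSSH("docker logs --tail 400 " + id)
		}
	}
}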
	I0827 15:27:59.971378    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:59.971435    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:00.860788    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:04.972106    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:04.972135    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0827 15:28:05.318104    3801 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0827 15:28:05.323353    3801 out.go:177] * Enabled addons: storage-provisioner
	I0827 15:28:05.331281    3801 addons.go:510] duration metric: took 30.47000425s for enable addons: enabled=[storage-provisioner]
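Note the outcome just above: addon enablement finishes with only storage-provisioner, because the default-storageclass callback needs a live API round-trip (listing StorageClasses at /apis/storage.k8s.io/v1/storageclasses) and that request dies with an i/o timeout against the unreachable 10.0.2.15:8443. The api_server.go:253/269 pairs that dominate this log are a health-poll loop: each probe gets roughly five seconds (the gap between a "Checking" line and its matching "stopped" line) before the HTTP client timeout fires. A minimal sketch of one such probe, assuming a 5-second client timeout inferred from those timestamps:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz mirrors the probe pattern at api_server.go:253/269.
// The 5s timeout is inferred from the ~5s Checking-to-stopped gap in
// the log; it is an assumption, not a constant read from minikube.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed from log timing
		Transport: &http.Transport{
			// Sketch only: minikube verifies against the cluster CA
			// instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "Client.Timeout exceeded while awaiting headers"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}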
	I0827 15:28:05.862912    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:05.863123    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:05.883363    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:28:05.883454    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:05.896772    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:28:05.896847    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:05.911511    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:28:05.911579    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:05.921941    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:28:05.922021    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:05.932558    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:28:05.932622    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:05.942967    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:28:05.943027    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:05.952884    3939 logs.go:276] 0 containers: []
	W0827 15:28:05.952896    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:05.952957    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:05.963660    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:28:05.963676    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:28:05.963682    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:28:05.978351    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:28:05.978364    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:28:05.989706    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:28:05.989717    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:28:06.003745    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:28:06.003758    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:28:06.015379    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:28:06.015390    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:06.027310    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:06.027324    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:06.032315    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:06.032324    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:06.068110    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:28:06.068125    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:28:06.080253    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:28:06.080264    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:28:06.102573    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:28:06.102586    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:28:06.116663    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:28:06.116673    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:28:06.128661    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:28:06.128672    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:28:06.140353    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:06.140364    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:06.177502    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:28:06.177510    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:28:06.215775    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:28:06.215787    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:28:06.229735    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:06.229749    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:08.756934    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:09.973070    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:09.973143    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
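Two minikube processes are interleaved here: PID 3801 and PID 3939 belong to different test profiles, each polling its own guest's 10.0.2.15:8443 (the default QEMU user-mode NAT address), which is why adjacent timestamps are not always monotonic. When reading a merged run like this one, it can help to demultiplex the stream by the PID field; the following is purely a reader-side convenience, not part of minikube:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches klog lines such as
// "I0827 15:28:05.863123    3939 ssh_runner.go:195] ..."
// and captures the PID column.
var pidRe = regexp.MustCompile(`^\s*[IWEF]\d{4} [\d:.]+\s+(\d+)\s`)

func main() {
	streams := map[string][]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long log lines
	for sc.Scan() {
		line := sc.Text()
		if m := pidRe.FindStringSubmatch(line); m != nil {
			streams[m[1]] = append(streams[m[1]], line)
		}
	}
	for pid, lines := range streams {
		fmt.Printf("=== pid %s: %d lines ===\n", pid, len(lines))
	}
}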
	I0827 15:28:13.759019    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:13.759204    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:13.775251    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:28:13.775338    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:13.789049    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:28:13.789121    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:13.800000    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:28:13.800071    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:13.810302    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:28:13.810367    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:13.820395    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:28:13.820470    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:13.830939    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:28:13.831005    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:13.841297    3939 logs.go:276] 0 containers: []
	W0827 15:28:13.841308    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:13.841366    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:13.852101    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:28:13.852120    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:13.852125    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:13.856341    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:28:13.856348    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:28:13.867988    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:13.867997    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:13.893015    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:13.893025    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:13.932076    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:28:13.932084    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:28:13.946067    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:28:13.946077    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:28:13.959818    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:28:13.959829    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:28:13.971048    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:28:13.971058    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:28:13.990989    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:28:13.990998    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:28:14.005386    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:28:14.005399    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:28:14.023284    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:28:14.023298    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:28:14.035956    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:14.035965    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:14.076429    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:28:14.076440    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:28:14.114740    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:28:14.114752    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:28:14.126050    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:28:14.126061    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:28:14.138047    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:28:14.138058    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:14.974619    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:14.974665    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:16.651990    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:19.975764    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:19.975818    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:21.652701    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:21.653117    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:21.699880    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:28:21.699982    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:21.716084    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:28:21.716186    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:21.729065    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:28:21.729137    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:21.739984    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:28:21.740059    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:21.751742    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:28:21.751815    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:21.762971    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:28:21.763042    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:21.773960    3939 logs.go:276] 0 containers: []
	W0827 15:28:21.773971    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:21.774029    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:21.788335    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:28:21.788352    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:28:21.788358    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:28:21.800673    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:28:21.800687    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:28:21.813648    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:28:21.813659    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:28:21.828415    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:28:21.828424    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:28:21.840643    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:28:21.840657    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:28:21.852898    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:28:21.852909    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:28:21.877467    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:21.877480    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:21.900891    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:28:21.900899    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:21.912822    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:28:21.912835    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:28:21.931623    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:21.931633    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:21.970185    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:21.970195    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:22.004565    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:28:22.004580    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:28:22.042800    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:28:22.042810    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:28:22.060411    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:22.060421    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:22.065065    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:28:22.065075    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:28:22.083046    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:28:22.083056    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:28:24.977941    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:24.977989    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:24.597103    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:29.978874    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:29.978896    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:29.599300    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:29.599654    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:29.628173    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:28:29.628310    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:29.646563    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:28:29.646653    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:29.660519    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:28:29.660590    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:29.672903    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:28:29.672976    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:29.686555    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:28:29.686621    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:29.697313    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:28:29.697385    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:29.707199    3939 logs.go:276] 0 containers: []
	W0827 15:28:29.707209    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:29.707274    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:29.717604    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:28:29.717621    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:29.717626    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:29.721963    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:28:29.721969    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:28:29.760265    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:28:29.760283    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:28:29.772990    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:28:29.773005    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:28:29.788504    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:29.788515    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:29.827097    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:28:29.827105    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:28:29.841341    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:28:29.841351    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:28:29.853351    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:29.853359    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:29.877920    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:28:29.877933    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:28:29.889289    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:28:29.889299    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:28:29.907650    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:28:29.907661    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:28:29.919395    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:29.919407    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:29.954451    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:28:29.954462    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:28:29.968298    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:28:29.968313    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:28:29.982855    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:28:29.982865    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:28:29.994963    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:28:29.994975    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
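The "container status" command above is a two-level fallback: the backtick substitution (which crictl || echo crictl) resolves crictl's path when it is installed, or leaves the bare name so the call fails fast, and when that whole invocation fails the trailing "|| sudo docker ps -a" arm runs instead. A native Go rendering of the same preference order, for illustration only:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors
//   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
// from the log: prefer crictl when it is on PATH, otherwise fall back
// to docker.
func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
			return out, nil
		}
	}
	// crictl is missing or failed; try docker instead.
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime responded:", err)
		return
	}
	fmt.Print(string(out))
}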
	I0827 15:28:32.509289    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:34.980096    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:34.980254    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:34.993005    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:28:34.993082    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:35.008829    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:28:35.008898    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:35.019755    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:28:35.019834    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:35.035032    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:28:35.035100    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:35.046205    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:28:35.046279    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:35.057059    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:28:35.057131    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:35.071398    3801 logs.go:276] 0 containers: []
	W0827 15:28:35.071409    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:35.071470    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:35.086240    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:28:35.086254    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:28:35.086260    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:28:35.101285    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:28:35.101297    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:28:35.115061    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:28:35.115074    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:28:35.126715    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:28:35.126727    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:28:35.145040    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:28:35.145051    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:28:35.164670    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:35.164684    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:35.198772    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:35.198785    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:35.202989    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:35.202995    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:35.237172    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:28:35.237185    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:28:35.249251    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:28:35.249265    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:28:35.261325    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:28:35.261338    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:28:35.275955    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:35.275966    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:35.300230    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:28:35.300237    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:37.812072    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:37.511800    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:37.512237    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:37.544754    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:28:37.544893    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:37.565665    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:28:37.565765    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:37.580528    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:28:37.580599    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:37.592687    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:28:37.592760    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:37.603706    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:28:37.603775    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:37.614523    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:28:37.614588    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:37.629318    3939 logs.go:276] 0 containers: []
	W0827 15:28:37.629330    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:37.629389    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:37.640077    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:28:37.640097    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:28:37.640102    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:28:37.656514    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:28:37.656525    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:28:37.675684    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:28:37.675695    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:28:37.688671    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:28:37.688681    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:28:37.700576    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:28:37.700590    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:28:37.721055    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:28:37.721069    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:28:37.733858    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:28:37.733872    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:37.745476    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:37.745489    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:37.749533    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:28:37.749539    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:28:37.761126    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:37.761136    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:37.785618    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:37.785629    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:37.824849    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:37.824871    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:37.862643    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:28:37.862657    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:28:37.902105    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:28:37.902117    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:28:37.917078    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:28:37.917089    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:28:37.929415    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:28:37.929426    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:28:42.812978    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:42.813234    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:42.837160    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:28:42.837258    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:42.853448    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:28:42.853544    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:42.866454    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:28:42.866528    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:42.878193    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:28:42.878261    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:42.891402    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:28:42.891472    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:42.902279    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:28:42.902350    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:42.912995    3801 logs.go:276] 0 containers: []
	W0827 15:28:42.913006    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:42.913067    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:42.923341    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:28:42.923357    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:28:42.923364    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:28:42.937885    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:28:42.937898    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:28:42.952605    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:28:42.952618    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:28:42.964236    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:28:42.964250    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:28:42.976078    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:28:42.976088    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:28:42.987066    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:42.987079    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:43.011220    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:43.011238    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:43.044470    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:43.044479    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:43.081526    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:28:43.081537    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:28:43.093496    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:28:43.093506    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:28:43.109471    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:28:43.109482    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:28:43.128683    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:28:43.128693    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:43.141456    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:43.141467    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:40.442957    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:45.646140    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:45.445525    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:45.445805    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:45.477859    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:28:45.477981    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:45.499381    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:28:45.499484    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:45.513914    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:28:45.513987    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:45.528298    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:28:45.528372    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:45.538969    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:28:45.539045    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:45.553596    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:28:45.553663    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:45.565918    3939 logs.go:276] 0 containers: []
	W0827 15:28:45.565930    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:45.565991    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:45.576647    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:28:45.576666    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:28:45.576672    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:28:45.589985    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:28:45.589997    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:28:45.602298    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:28:45.602309    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:28:45.618075    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:28:45.618087    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:28:45.632481    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:28:45.632491    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:28:45.671314    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:28:45.671327    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:28:45.689700    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:45.689713    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:45.726182    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:28:45.726193    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:28:45.739973    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:45.739986    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:45.744566    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:45.744576    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:45.782504    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:28:45.782516    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:28:45.793966    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:45.793978    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:45.818680    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:28:45.818691    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:45.831219    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:28:45.831234    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:28:45.846542    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:28:45.846553    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:28:45.864242    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:28:45.864252    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:28:48.377818    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:50.648206    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:50.648410    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:50.673687    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:28:50.673782    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:50.688259    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:28:50.688328    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:50.700061    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:28:50.700138    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:50.711394    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:28:50.711454    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:50.721663    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:28:50.721734    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:50.731921    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:28:50.731990    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:50.742154    3801 logs.go:276] 0 containers: []
	W0827 15:28:50.742168    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:50.742233    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:50.752717    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:28:50.752732    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:28:50.752737    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:28:50.767151    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:28:50.767163    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:28:50.783943    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:28:50.783954    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:28:50.801368    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:50.801381    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:50.834874    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:50.834884    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:50.838961    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:50.838969    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:50.874105    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:28:50.874120    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:28:50.888087    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:28:50.888100    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:28:50.899341    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:28:50.899353    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:28:50.910947    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:28:50.910956    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:28:50.922237    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:28:50.922248    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:28:50.933141    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:50.933151    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:50.957088    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:28:50.957098    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:53.380015    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:53.380193    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:53.398906    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:28:53.399010    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:53.413124    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:28:53.413200    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:53.427436    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:28:53.427503    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:53.438498    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:28:53.438575    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:53.448492    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:28:53.448557    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:53.458633    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:28:53.458700    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:53.468814    3939 logs.go:276] 0 containers: []
	W0827 15:28:53.468827    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:53.468888    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:53.479365    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:28:53.479382    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:28:53.479387    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:28:53.499810    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:28:53.499821    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:28:53.514333    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:28:53.514343    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:28:53.526861    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:28:53.526871    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:28:53.538875    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:53.538885    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:53.563699    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:53.563708    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:53.603358    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:28:53.603368    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:28:53.616363    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:28:53.616373    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:28:53.654299    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:28:53.654315    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:28:53.668690    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:28:53.668703    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:28:53.679892    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:28:53.679902    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:28:53.697340    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:28:53.697350    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:53.708971    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:53.708981    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:53.742995    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:28:53.743007    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:28:53.757508    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:53.757521    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:53.761634    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:28:53.761640    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:28:53.469410    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:56.282074    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:58.470003    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:58.470483    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:58.509316    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:28:58.509451    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:58.531760    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:28:58.531861    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:58.547468    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:28:58.547554    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:58.561581    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:28:58.561657    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:58.573174    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:28:58.573247    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:58.584158    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:28:58.584240    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:58.594619    3801 logs.go:276] 0 containers: []
	W0827 15:28:58.594633    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:58.594682    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:58.606478    3801 logs.go:276] 1 containers: [d20687948062]
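	The block above is one full enumeration pass: for each control-plane component the runner lists kubelet-managed containers, whose names carry a k8s_ prefix, then tails each hit's logs in the "Gathering logs for ..." steps that follow. A sketch of the same pass as a shell loop; the component names, filter syntax, and 400-line tail are taken from the log, while the loop structure itself is illustrative:

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	        for id in $(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}'); do
	            echo "=== ${c} [${id}] ==="
	            docker logs --tail 400 "${id}"
	        done
	    done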
	I0827 15:28:58.606492    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:28:58.606497    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:28:58.618221    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:58.618232    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:58.643006    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:28:58.643018    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:58.654746    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:58.654757    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:58.690241    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:28:58.690249    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:28:58.704767    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:28:58.704777    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:28:58.718703    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:28:58.718714    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:28:58.730869    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:28:58.730880    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:28:58.742987    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:58.742997    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:58.747384    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:58.747393    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
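	Alongside the per-container logs, each cycle captures host-side evidence as well. The commands below are copied verbatim from the surrounding log lines and grouped here only for readability: systemd unit logs for the kubelet and the container runtime, kernel warnings and errors, and the apiserver's view of the nodes via the version-pinned kubectl binary:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig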
	I0827 15:28:58.781655    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:28:58.781670    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:28:58.793653    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:28:58.793666    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:28:58.808470    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:28:58.808481    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:01.328690    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:01.284264    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:01.284411    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:01.298266    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:01.298342    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:01.309256    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:01.309327    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:01.319316    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:01.319376    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:01.329904    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:01.329963    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:01.339776    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:01.339833    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:01.350298    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:01.350364    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:01.360809    3939 logs.go:276] 0 containers: []
	W0827 15:29:01.360820    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:01.360871    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:01.371409    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:01.371427    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:01.371434    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:01.386596    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:01.386608    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:01.421928    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:01.421941    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:01.435018    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:01.435031    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:01.446619    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:01.446633    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:01.485365    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:01.485374    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:01.489268    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:01.489276    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:01.503660    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:01.503670    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:01.544318    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:01.544328    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:01.556659    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:01.556671    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:01.569534    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:01.569545    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:01.586421    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:01.586432    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:01.597911    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:01.597922    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:01.609744    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:01.609754    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:01.627232    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:01.627248    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:01.651274    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:01.651289    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:04.164913    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:06.329815    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:06.329920    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:06.341104    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:29:06.341177    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:06.352293    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:29:06.352360    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:06.362917    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:29:06.362988    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:06.374031    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:29:06.374100    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:06.384153    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:29:06.384227    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:06.395306    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:29:06.395373    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:06.405342    3801 logs.go:276] 0 containers: []
	W0827 15:29:06.405353    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:06.405409    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:06.416004    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:29:06.416027    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:06.416034    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:06.451368    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:29:06.451388    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:06.470795    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:29:06.470806    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:29:06.481904    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:29:06.481915    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:29:06.493565    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:06.493575    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:06.519034    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:06.519044    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:06.524076    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:06.524084    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:06.559049    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:29:06.559061    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:29:06.573627    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:29:06.573637    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:29:06.585708    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:29:06.585720    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:29:06.607075    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:29:06.607087    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:29:06.618928    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:29:06.618939    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:06.636593    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:29:06.636605    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:09.167007    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:09.167131    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:09.185118    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:09.185219    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:09.202155    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:09.202228    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:09.213775    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:09.213845    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:09.227946    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:09.228023    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:09.238450    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:09.238511    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:09.248675    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:09.248775    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:09.263122    3939 logs.go:276] 0 containers: []
	W0827 15:29:09.263136    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:09.263190    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:09.273795    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:09.273811    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:09.273818    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:09.288682    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:09.288693    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:09.302265    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:09.302275    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:09.316387    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:09.316401    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:09.328312    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:09.328347    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:09.339869    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:09.339883    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:09.351842    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:09.351856    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:09.369358    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:09.369368    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:09.380428    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:09.380438    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:09.392170    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:09.392181    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:09.404988    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:09.405000    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:09.150469    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:09.443426    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:09.443435    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:09.448079    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:09.448087    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:09.483874    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:09.483888    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:09.498208    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:09.498221    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:09.535966    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:09.535978    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:12.061200    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:14.152607    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:14.152711    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:14.163438    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:29:14.163510    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:14.177220    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:29:14.177286    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:14.188090    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:29:14.188151    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:14.200254    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:29:14.200322    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:14.211474    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:29:14.211540    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:14.222577    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:29:14.222653    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:14.233713    3801 logs.go:276] 0 containers: []
	W0827 15:29:14.233725    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:14.233782    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:14.244097    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:29:14.244115    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:29:14.244120    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:14.257993    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:29:14.258006    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:14.276339    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:14.276348    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:14.310692    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:29:14.310702    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:29:14.325977    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:29:14.325988    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:29:14.337581    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:29:14.337593    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:29:14.349291    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:29:14.349301    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:29:14.363801    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:29:14.363811    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:29:14.378978    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:29:14.378988    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:29:14.390027    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:14.390037    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:14.415034    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:14.415042    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:14.448986    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:14.448996    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:14.453263    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:29:14.453269    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:16.966823    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:17.061393    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:17.061566    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:17.089458    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:17.089545    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:17.103108    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:17.103178    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:17.114427    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:17.114492    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:17.125001    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:17.125078    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:17.137964    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:17.138035    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:17.148156    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:17.148221    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:17.158226    3939 logs.go:276] 0 containers: []
	W0827 15:29:17.158237    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:17.158295    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:17.169417    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:17.169433    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:17.169439    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:17.184293    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:17.184304    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:17.208721    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:17.208731    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:17.226121    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:17.226132    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:17.238456    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:17.238470    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:17.249758    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:17.249769    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:17.285933    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:17.285943    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:17.324068    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:17.324083    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:17.337887    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:17.337898    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:17.349806    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:17.349817    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:17.353908    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:17.353916    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:17.372173    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:17.372187    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:17.384144    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:17.384153    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:17.396030    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:17.396042    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:17.408045    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:17.408056    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:17.446885    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:17.446898    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:21.967278    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:21.967645    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:22.004630    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:29:22.004771    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:22.024921    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:29:22.025012    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:22.040444    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:29:22.040514    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:22.058377    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:29:22.058449    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:22.069154    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:29:22.069231    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:22.080263    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:29:22.080333    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:22.090908    3801 logs.go:276] 0 containers: []
	W0827 15:29:22.090925    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:22.090989    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:22.104038    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:29:22.104054    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:29:22.104058    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:29:22.115943    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:29:22.115957    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:22.133738    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:22.133748    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:22.167202    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:22.167210    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:22.171738    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:22.171745    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:22.205561    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:29:22.205575    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:29:22.223509    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:29:22.223522    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:29:22.236542    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:29:22.236552    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:29:22.248822    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:29:22.248832    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:22.262773    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:29:22.262785    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:29:22.281794    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:29:22.281805    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:29:22.293755    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:22.293768    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:22.317344    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:29:22.317352    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:19.967227    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:24.831463    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:24.968875    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:24.969098    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:24.998202    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:24.998326    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:25.016813    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:25.016898    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:25.030596    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:25.030669    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:25.046490    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:25.046555    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:25.056953    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:25.057022    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:25.067588    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:25.067654    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:25.083703    3939 logs.go:276] 0 containers: []
	W0827 15:29:25.083717    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:25.083778    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:25.095856    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:25.095874    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:25.095880    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:25.134575    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:25.134584    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:25.152236    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:25.152247    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:25.168361    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:25.168373    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:25.185776    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:25.185787    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:25.198281    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:25.198294    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:25.203066    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:25.203073    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:25.218772    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:25.218783    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:25.253722    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:25.253736    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:25.269388    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:25.269401    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:25.310515    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:25.310527    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:25.322345    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:25.322360    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:25.333591    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:25.333604    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:25.352101    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:25.352115    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:25.364015    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:25.364026    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:25.375741    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:25.375753    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:27.902173    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:29.833550    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:29.833663    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:29.847121    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:29:29.847201    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:29.861431    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:29:29.861499    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:29.872065    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:29:29.872137    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:29.882780    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:29:29.882853    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:29.893113    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:29:29.893183    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:29.905317    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:29:29.905387    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:29.915436    3801 logs.go:276] 0 containers: []
	W0827 15:29:29.915453    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:29.915516    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:29.937288    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:29:29.937303    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:29:29.937308    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:29:29.948753    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:29:29.948764    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:29:29.960215    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:29:29.960227    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:29:29.983305    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:29:29.983316    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:29:29.994920    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:29:29.994932    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:30.013499    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:30.013509    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:30.018210    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:30.018216    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:30.054970    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:29:30.054986    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:29:30.069760    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:29:30.069770    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:30.081785    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:30.081797    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:30.106666    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:30.106676    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:30.142092    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:29:30.142104    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:30.156070    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:29:30.156082    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:29:32.669892    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:32.904660    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:32.904860    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:32.923717    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:32.923811    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:32.937385    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:32.937459    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:32.948560    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:32.948626    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:32.958659    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:32.958731    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:32.969327    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:32.969397    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:32.979606    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:32.979677    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:32.989425    3939 logs.go:276] 0 containers: []
	W0827 15:29:32.989436    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:32.989495    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:33.000118    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:33.000134    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:33.000139    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:33.012934    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:33.012945    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:33.025609    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:33.025620    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:33.037664    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:33.037674    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:33.075180    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:33.075191    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:33.090001    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:33.090014    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:33.101797    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:33.101808    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:33.113608    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:33.113622    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:33.125600    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:33.125611    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:33.143183    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:33.143195    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:33.147747    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:33.147754    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:33.161843    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:33.161857    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:33.178849    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:33.178860    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:33.202105    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:33.202114    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:33.238625    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:33.238634    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:33.250035    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:33.250046    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:37.672094    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:37.672346    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:37.693969    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:29:37.694084    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:37.710980    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:29:37.711057    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:37.723165    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:29:37.723236    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:37.734082    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:29:37.734144    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:37.744984    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:29:37.745050    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:37.755766    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:29:37.755834    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:37.766003    3801 logs.go:276] 0 containers: []
	W0827 15:29:37.766014    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:37.766072    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:37.776405    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:29:37.776421    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:29:37.776426    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:29:37.788462    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:29:37.788474    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:29:37.800265    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:29:37.800276    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:29:37.814721    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:37.814731    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:37.849140    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:29:37.849151    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:37.867648    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:29:37.867662    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:29:37.883101    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:29:37.883111    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:29:37.894763    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:29:37.894775    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:37.913078    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:29:37.913088    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:29:37.925166    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:37.925176    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:37.948483    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:29:37.948491    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:37.960067    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:37.960078    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:37.993187    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:37.993195    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:35.788545    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:40.499854    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:40.790657    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:40.790801    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:40.807438    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:40.807533    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:40.819676    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:40.819748    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:40.830314    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:40.830399    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:40.846279    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:40.846354    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:40.856919    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:40.856987    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:40.867897    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:40.867964    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:40.877915    3939 logs.go:276] 0 containers: []
	W0827 15:29:40.877926    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:40.877988    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:40.888710    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:40.888726    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:40.888732    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:40.893009    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:40.893018    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:40.929854    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:40.929868    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:40.941071    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:40.941083    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:40.954212    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:40.954229    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:40.995560    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:40.995568    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:41.008425    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:41.008437    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:41.020032    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:41.020044    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:41.042841    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:41.042849    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:41.054453    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:41.054463    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:41.066964    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:41.066978    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:41.085513    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:41.085522    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:41.122185    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:41.122197    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:41.135970    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:41.135984    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:41.150090    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:41.150100    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:41.162048    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:41.162058    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:43.678533    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:45.502307    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
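
The pair of lines above shows the loop this whole section cycles through: each of the two minikube processes (PIDs 3801 and 3939) polls the apiserver's /healthz endpoint at https://10.0.2.15:8443, the GET times out with "context deadline exceeded", and the process reacts by enumerating containers and re-gathering logs before trying again. A minimal Go sketch of one such probe follows; the 5-second timeout and the InsecureSkipVerify transport are assumptions for illustration, not minikube's actual client configuration:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz issues a single GET against the apiserver's /healthz
// endpoint with a short client timeout, mirroring the
// "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded"
// pairs in the log above. Skipping TLS verification is an assumption
// of this sketch; minikube's real client trusts the cluster CA.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. context deadline exceeded while awaiting headers
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Same endpoint the log polls; 5s is an assumed timeout value.
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
		fmt.Println("stopped:", err)
	}
}
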
	I0827 15:29:45.502543    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:45.529715    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:29:45.529851    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:45.547377    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:29:45.547462    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:45.561821    3801 logs.go:276] 2 containers: [bacf943f7873 fb03113f9fbd]
	I0827 15:29:45.561893    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:45.573107    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:29:45.573165    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:45.583125    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:29:45.583204    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:45.593283    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:29:45.593343    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:45.603353    3801 logs.go:276] 0 containers: []
	W0827 15:29:45.603366    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:45.603427    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:45.614255    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:29:45.614268    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:29:45.614273    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:29:45.628487    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:29:45.628500    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:29:45.639919    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:29:45.639933    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:29:45.651525    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:29:45.651537    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:45.668713    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:29:45.668723    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:29:45.680234    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:45.680244    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:45.703611    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:29:45.703619    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:45.715783    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:45.715794    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:45.720866    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:45.720873    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:45.756023    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:29:45.756035    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:45.774466    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:29:45.774476    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:29:45.785963    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:29:45.785975    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:29:45.828150    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:45.828162    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
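
Each gathering pass begins with the enumeration visible at the top of the block above: one `docker ps -a` per control-plane component, filtered on the `k8s_` name prefix that the kubelet's Docker integration gives pod containers, and formatted to print only container IDs. Components with no match, like kindnet here, produce the "No container was found matching" warning. A hedged Go sketch of that enumeration, run locally rather than over SSH as minikube's ssh_runner does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the enumeration step in the log: one
// `docker ps -a` per component, filtered on the k8s_<name> prefix
// and printing only the container IDs (running or exited).
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The same component list each pass in the log walks through.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}
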
	I0827 15:29:48.680973    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:48.681097    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:48.692975    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:48.693049    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:48.703593    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:48.703673    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:48.714255    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:48.714315    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:48.730002    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:48.730079    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:48.741113    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:48.741184    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:48.751270    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:48.751336    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:48.761475    3939 logs.go:276] 0 containers: []
	W0827 15:29:48.761485    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:48.761535    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:48.772003    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:48.772022    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:48.772027    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:48.786023    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:48.786033    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:48.800192    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:48.800204    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:48.814072    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:48.814084    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:48.828885    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:48.828896    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:48.852544    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:48.852552    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:48.869928    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:48.869939    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:48.882366    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:48.882376    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:48.893882    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:48.893893    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:48.906058    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:48.906071    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:48.945568    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:48.945580    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:48.957531    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:48.957545    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:48.968786    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:48.968795    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:48.980250    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:48.980260    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:48.984308    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:48.984317    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:49.020274    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:49.020286    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:48.365238    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:51.560596    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:53.367314    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:53.367522    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:53.392861    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:29:53.392979    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:53.410826    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:29:53.410915    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:53.425208    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:29:53.425292    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:53.437390    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:29:53.437449    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:53.451648    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:29:53.451720    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:53.462094    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:29:53.462158    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:53.471920    3801 logs.go:276] 0 containers: []
	W0827 15:29:53.471933    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:53.471986    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:53.482494    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:29:53.482512    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:29:53.482517    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:29:53.497624    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:53.497636    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:53.502134    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:29:53.502141    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:29:53.516545    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:29:53.516557    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:29:53.528846    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:29:53.528858    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:29:53.540790    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:29:53.540801    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:29:53.552925    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:29:53.552936    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:29:53.566543    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:53.566557    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:53.590765    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:29:53.590781    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:53.604842    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:29:53.604857    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:29:53.616355    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:29:53.616367    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:29:53.635092    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:29:53.635106    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:29:53.652817    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:53.652831    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:53.688363    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:53.688371    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:53.722702    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:29:53.722720    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:56.236692    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:56.562392    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:56.562593    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:56.582109    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:56.582204    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:56.596590    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:56.596672    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:56.608960    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:56.609030    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:56.619985    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:56.620058    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:56.630754    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:56.630821    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:56.645288    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:56.645354    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:56.655372    3939 logs.go:276] 0 containers: []
	W0827 15:29:56.655384    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:56.655442    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:56.670373    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:56.670391    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:56.670397    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:56.682532    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:56.682546    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:56.694549    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:56.694564    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:56.706464    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:56.706479    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:56.741023    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:56.741038    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:56.754950    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:56.754961    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:56.769026    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:56.769039    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:56.783229    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:56.783243    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:56.794493    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:56.794504    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:56.812461    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:56.812471    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:56.830780    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:56.830791    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:56.855326    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:56.855334    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:56.869059    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:56.869072    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:56.881492    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:56.881505    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:56.919179    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:56.919190    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:56.957512    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:56.957523    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
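
After enumeration, each pass fans out over the collection commands seen above: `docker logs --tail 400` per container ID, journalctl for the kubelet and docker/cri-docker units, a filtered dmesg, `kubectl describe nodes` against the cluster kubeconfig, and a container-status listing that tries crictl first and falls back to `docker ps -a`. A sketch of that fan-out in Go, again assuming local execution through the same `/bin/bash -c` wrapper the log shows:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command through `/bin/bash -c`, the same
// wrapper ssh_runner uses in the log above, and returns its combined
// output. Running locally instead of over SSH is an assumption of
// this sketch.
func gather(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	cmds := []string{
		"docker logs --tail 400 d954b50b583e",            // per-container logs
		"sudo journalctl -u kubelet -n 400",              // kubelet journal
		"sudo journalctl -u docker -u cri-docker -n 400", // Docker/CRI journal
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		// crictl if installed, otherwise fall back to docker ps:
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		if out, err := gather(c); err != nil {
			fmt.Printf("%s failed: %v\n", c, err)
		} else {
			fmt.Println(out)
		}
	}
}
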
	I0827 15:30:01.238860    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:01.239060    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:01.257552    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:01.257657    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:01.272077    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:01.272154    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:01.284305    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:01.284384    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:01.295099    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:01.295180    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:01.305776    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:01.305856    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:01.316646    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:01.316722    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:01.327526    3801 logs.go:276] 0 containers: []
	W0827 15:30:01.327538    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:01.327607    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:01.338556    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:01.338574    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:01.338580    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:01.363855    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:01.363866    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:01.368825    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:01.368832    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:01.382562    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:01.382573    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:01.400109    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:01.400120    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:01.411747    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:01.411759    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:01.423683    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:01.423695    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:01.459234    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:01.459245    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:01.470479    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:01.470490    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:01.483442    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:01.483454    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:01.498849    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:01.498861    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:01.518370    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:01.518383    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:01.532061    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:01.532073    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:01.544716    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:01.544727    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:01.581378    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:01.581389    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:29:59.464052    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:04.096252    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:04.465105    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:04.465301    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:04.485917    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:30:04.486009    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:04.508018    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:30:04.508098    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:04.519585    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:30:04.519646    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:04.530388    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:30:04.530451    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:04.541094    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:30:04.541162    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:04.552140    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:30:04.552208    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:04.566589    3939 logs.go:276] 0 containers: []
	W0827 15:30:04.566601    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:04.566661    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:04.580088    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:30:04.580104    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:30:04.580109    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:30:04.592411    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:30:04.592423    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:04.604398    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:04.604409    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:04.642656    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:30:04.642669    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:30:04.685443    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:30:04.685456    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:30:04.700650    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:30:04.700665    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:30:04.714212    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:04.714229    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:04.737217    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:04.737226    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:04.772969    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:30:04.772979    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:30:04.787979    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:30:04.787992    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:30:04.801667    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:30:04.801676    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:30:04.819794    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:04.819806    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:04.823957    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:30:04.823965    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:30:04.835166    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:30:04.835180    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:30:04.846696    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:30:04.846708    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:30:04.858410    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:30:04.858423    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:30:07.375899    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:09.098445    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:09.098671    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:09.115975    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:09.116062    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:09.129403    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:09.129477    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:09.141620    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:09.141692    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:09.152273    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:09.152350    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:09.163130    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:09.163192    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:09.181047    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:09.181112    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:09.194719    3801 logs.go:276] 0 containers: []
	W0827 15:30:09.194731    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:09.194788    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:09.205311    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:09.205341    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:09.205346    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:09.240687    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:09.240701    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:09.255204    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:09.255216    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:09.270722    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:09.270736    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:09.281869    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:09.281880    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:09.296411    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:09.296421    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:09.330685    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:09.330699    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:09.335616    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:09.335623    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:09.347559    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:09.347570    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:09.359783    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:09.359793    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:09.376322    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:09.376332    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:09.388471    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:09.388481    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:09.406023    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:09.406034    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:09.427358    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:09.427371    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:09.452514    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:09.452523    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:11.966434    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:12.376738    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:12.376949    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:12.410543    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:30:12.410647    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:12.436248    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:30:12.436319    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:12.450776    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:30:12.450846    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:12.466072    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:30:12.466144    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:12.476458    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:30:12.476523    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:12.487150    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:30:12.487219    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:12.498513    3939 logs.go:276] 0 containers: []
	W0827 15:30:12.498524    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:12.498581    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:12.515698    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:30:12.515717    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:30:12.515723    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:30:12.532320    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:30:12.532331    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:30:12.550239    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:30:12.550251    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:30:12.562224    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:12.562235    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:12.598332    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:30:12.598346    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:30:12.612755    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:30:12.612765    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:30:12.624327    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:30:12.624342    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:30:12.637994    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:30:12.638007    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:30:12.650520    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:12.650530    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:12.673533    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:30:12.673543    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:12.687286    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:30:12.687301    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:30:12.702622    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:30:12.702633    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:30:12.717457    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:30:12.717472    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:30:12.729821    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:12.729837    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:12.769634    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:12.769645    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:12.774271    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:30:12.774277    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:30:16.967055    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:16.967228    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:16.983607    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:16.983706    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:16.995907    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:16.995977    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:17.007545    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:17.007618    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:17.019012    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:17.019080    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:17.029541    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:17.029606    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:17.048500    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:17.048565    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:17.058598    3801 logs.go:276] 0 containers: []
	W0827 15:30:17.058609    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:17.058658    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:17.068819    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:17.068838    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:17.068843    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:17.086837    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:17.086848    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:17.098297    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:17.098308    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:17.114992    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:17.115004    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:17.129851    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:17.129862    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:17.141508    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:17.141519    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:17.175729    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:17.175736    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:17.210367    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:17.210378    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:17.222272    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:17.222282    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:17.234625    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:17.234636    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:17.260161    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:17.260169    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:17.264257    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:17.264264    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:17.282955    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:17.282965    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:17.296742    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:17.296752    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:17.307772    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:17.307784    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:15.316347    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:19.821919    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:20.318622    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:20.318804    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:20.338154    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:30:20.338250    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:20.352326    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:30:20.352402    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:20.364431    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:30:20.364524    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:20.375450    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:30:20.375523    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:20.390583    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:30:20.390649    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:20.401244    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:30:20.401312    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:20.411211    3939 logs.go:276] 0 containers: []
	W0827 15:30:20.411221    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:20.411282    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:20.421331    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:30:20.421347    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:20.421352    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:20.459046    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:30:20.459060    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:30:20.475271    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:30:20.475285    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:30:20.487135    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:30:20.487145    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:30:20.500839    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:20.500852    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:20.524066    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:20.524072    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:20.561211    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:30:20.561223    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:30:20.573747    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:30:20.573760    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:30:20.592272    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:30:20.592283    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:30:20.606294    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:30:20.606305    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:30:20.623858    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:30:20.623871    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:30:20.635931    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:30:20.635943    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:20.647832    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:20.647842    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:20.651867    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:30:20.651874    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:30:20.694574    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:30:20.694587    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:30:20.708966    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:30:20.708979    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:30:23.226211    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:24.824132    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:24.824268    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:24.837470    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:24.837539    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:24.848001    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:24.848072    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:24.858432    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:24.858503    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:24.873168    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:24.873246    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:24.883359    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:24.883432    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:24.893721    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:24.893784    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:24.903997    3801 logs.go:276] 0 containers: []
	W0827 15:30:24.904007    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:24.904067    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:24.914049    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:24.914068    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:24.914074    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:24.949541    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:24.949555    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:24.961531    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:24.961544    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:24.973547    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:24.973558    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:25.007415    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:25.007423    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:25.022460    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:25.022471    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:25.034498    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:25.034511    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:25.049248    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:25.049261    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:25.061078    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:25.061089    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:25.080924    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:25.080934    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:25.098203    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:25.098216    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:25.122028    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:25.122037    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:25.135089    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:25.135102    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:25.139586    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:25.139594    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:25.151203    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:25.151215    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:27.664685    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:28.228062    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:28.228207    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:28.241892    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:30:28.241977    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:28.253478    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:30:28.253552    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:28.264586    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:30:28.264653    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:28.274829    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:30:28.274895    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:28.284981    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:30:28.285047    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:28.296111    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:30:28.296193    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:28.307586    3939 logs.go:276] 0 containers: []
	W0827 15:30:28.307599    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:28.307662    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:28.317995    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:30:28.318026    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:28.318032    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:28.352931    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:30:28.352945    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:30:28.364485    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:30:28.364498    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:30:28.405324    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:30:28.405338    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:30:28.418885    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:28.418897    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:28.440495    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:30:28.440506    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:28.452947    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:30:28.452961    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:30:28.467876    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:30:28.467889    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:30:28.478750    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:30:28.478762    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:30:28.496580    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:30:28.496591    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:30:28.508243    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:28.508257    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:28.548205    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:28.548224    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:28.553050    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:30:28.553059    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:30:28.567533    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:30:28.567547    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:30:28.579917    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:30:28.579928    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:30:28.591784    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:30:28.591796    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:30:32.666801    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
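
Each "stopped: ... Client.Timeout exceeded while awaiting headers" line is an HTTPS GET against the apiserver's /healthz endpoint whose client-side timeout expires before any response headers arrive. A minimal sketch of such a probe, assuming a 5s timeout and skipping the TLS details minikube configures from the profile:

    package sketch

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeHealthz issues one GET against the apiserver's /healthz endpoint.
    // With an unresponsive apiserver, the client-side timeout produces the
    // "context deadline exceeded (Client.Timeout exceeded while awaiting
    // headers)" error repeated throughout the log.
    func probeHealthz(base string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // assumed value; minikube's may differ
    		Transport: &http.Transport{
    			// The probe targets a raw IP; certificate handling elided here.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(base + "/healthz")
    	if err != nil {
    		return fmt.Errorf("stopped: %s/healthz: %w", base, err)
    	}
    	defer resp.Body.Close()
    	return nil
    }
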
	I0827 15:30:32.666908    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:32.677924    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:32.678000    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:32.688805    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:32.688874    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:32.700138    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:32.700205    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:32.710594    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:32.710662    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:32.724049    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:32.724113    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:32.734555    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:32.734615    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:32.744827    3801 logs.go:276] 0 containers: []
	W0827 15:30:32.744838    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:32.744887    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:32.755174    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:32.755192    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:32.755198    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:32.766814    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:32.766828    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:32.778516    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:32.778528    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:32.793118    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:32.793129    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:32.804383    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:32.804394    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:32.821095    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:32.821109    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:32.855024    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:32.855035    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:32.867548    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:32.867562    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:32.883139    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:32.883151    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:32.898168    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:32.898180    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:32.924846    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:32.924861    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:32.936622    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:32.936635    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:32.941403    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:32.941410    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:32.955796    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:32.955806    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:32.980964    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:32.980972    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:31.106832    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:35.520727    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:36.108963    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:36.109137    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:36.121620    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:30:36.121701    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:36.132891    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:30:36.132958    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:36.143614    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:30:36.143675    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:36.154065    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:30:36.154136    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:36.164966    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:30:36.165029    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:36.175981    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:30:36.176055    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:36.186107    3939 logs.go:276] 0 containers: []
	W0827 15:30:36.186121    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:36.186184    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:36.196305    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:30:36.196327    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:36.196333    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:36.235748    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:36.235757    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:36.271136    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:30:36.271147    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:36.283155    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:36.283169    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:36.287095    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:30:36.287101    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:30:36.301683    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:30:36.301698    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:30:36.316157    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:30:36.316170    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:30:36.327943    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:30:36.327957    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:30:36.339043    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:30:36.339057    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:30:36.350400    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:30:36.350412    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:30:36.362444    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:30:36.362457    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:30:36.379698    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:30:36.379710    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:30:36.392955    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:36.392967    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:36.416604    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:30:36.416613    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:30:36.453798    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:30:36.453810    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:30:36.468720    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:30:36.468730    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:30:38.983089    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:40.522587    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:40.522843    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:40.550903    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:40.551025    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:40.570593    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:40.570675    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:40.583249    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:40.583320    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:40.594535    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:40.594605    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:40.604533    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:40.604603    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:40.615520    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:40.615587    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:40.625735    3801 logs.go:276] 0 containers: []
	W0827 15:30:40.625751    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:40.625799    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:40.640220    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:40.640237    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:40.640244    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:40.675863    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:40.675874    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:40.694904    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:40.694915    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:40.713578    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:40.713588    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:40.725440    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:40.725451    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:40.737376    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:40.737389    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:40.749082    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:40.749094    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:40.761191    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:40.761202    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:40.787226    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:40.787244    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:40.801316    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:40.801332    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:40.812933    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:40.812943    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:40.825491    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:40.825503    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:40.837445    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:40.837457    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:40.864743    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:40.864755    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:40.899610    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:40.899618    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:43.985506    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:43.985653    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:44.000611    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:30:44.000684    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:44.012472    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:30:44.012541    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:44.023173    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:30:44.023243    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:44.033802    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:30:44.033867    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:44.044482    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:30:44.044554    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:44.055862    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:30:44.055926    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:44.070736    3939 logs.go:276] 0 containers: []
	W0827 15:30:44.070747    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:44.070810    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:44.080856    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:30:44.080874    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:44.080879    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:44.117082    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:30:44.117093    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:30:44.134661    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:30:44.134671    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:44.146063    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:44.146073    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:44.150024    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:30:44.150030    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:30:44.162832    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:44.162842    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:44.184585    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:44.184594    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:44.221749    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:30:44.221762    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:30:44.259136    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:30:44.259149    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:30:44.273251    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:30:44.273261    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:30:44.287561    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:30:44.287576    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:30:44.299143    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:30:44.299153    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:30:44.310622    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:30:44.310635    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:30:44.322098    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:30:44.322109    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:30:44.335925    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:30:44.335937    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:30:44.376113    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:30:44.376126    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:30:43.406253    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:46.888624    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:51.890926    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:51.891029    3939 kubeadm.go:597] duration metric: took 4m3.603192792s to restartPrimaryControlPlane
	W0827 15:30:51.891131    3939 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0827 15:30:51.891180    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0827 15:30:52.914460    3939 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.023298167s)
	I0827 15:30:52.914541    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 15:30:52.919368    3939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 15:30:52.922116    3939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 15:30:52.924913    3939 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 15:30:52.924919    3939 kubeadm.go:157] found existing configuration files:
	
	I0827 15:30:52.924946    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/admin.conf
	I0827 15:30:52.927792    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 15:30:52.927819    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 15:30:52.930822    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/kubelet.conf
	I0827 15:30:52.933564    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 15:30:52.933596    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 15:30:52.936579    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/controller-manager.conf
	I0827 15:30:52.939597    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 15:30:52.939620    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 15:30:52.942476    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/scheduler.conf
	I0827 15:30:52.945066    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 15:30:52.945085    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
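
The four grep-then-rm pairs above are the stale-config sweep: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the endpoint, or the file itself, is missing. A compact sketch of that loop, with the same hypothetical runSSH helper:

    package sketch

    import "fmt"

    // cleanupStaleConfigs removes any kubeconfig that does not reference the
    // expected endpoint, mirroring the kubeadm.go:163 messages above. grep
    // exits non-zero both when the pattern is absent and when the file does
    // not exist, so either case triggers the rm -f.
    func cleanupStaleConfigs(runSSH func(string) (string, error), endpoint string) {
    	files := []string{"admin.conf", "kubelet.conf",
    		"controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := "/etc/kubernetes/" + f
    		if _, err := runSSH(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
    			runSSH("sudo rm -f " + path)
    		}
    	}
    }
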
	I0827 15:30:52.948263    3939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0827 15:30:52.967538    3939 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0827 15:30:52.967683    3939 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 15:30:53.015345    3939 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 15:30:53.015404    3939 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 15:30:53.015458    3939 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0827 15:30:53.070279    3939 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 15:30:48.408570    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:48.408788    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:48.430818    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:48.430937    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:48.446613    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:48.446689    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:48.459674    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:48.459744    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:48.470947    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:48.471009    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:48.486479    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:48.486550    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:48.496738    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:48.496806    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:48.507130    3801 logs.go:276] 0 containers: []
	W0827 15:30:48.507142    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:48.507203    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:48.517796    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:48.517813    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:48.517818    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:48.542518    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:48.542527    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:48.557052    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:48.557067    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:48.561353    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:48.561362    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:48.596820    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:48.596832    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:48.610180    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:48.610193    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:48.623423    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:48.623436    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:48.641072    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:48.641082    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:48.655107    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:48.655117    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:48.667671    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:48.667683    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:48.689443    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:48.689462    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:48.707880    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:48.707892    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:48.743396    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:48.743407    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:48.755239    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:48.755250    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:48.768020    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:48.768034    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:51.285094    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:53.075322    3939 out.go:235]   - Generating certificates and keys ...
	I0827 15:30:53.075364    3939 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 15:30:53.075401    3939 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 15:30:53.075442    3939 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0827 15:30:53.075474    3939 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0827 15:30:53.075513    3939 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0827 15:30:53.075542    3939 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0827 15:30:53.075579    3939 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0827 15:30:53.075614    3939 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0827 15:30:53.075653    3939 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0827 15:30:53.075689    3939 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0827 15:30:53.075708    3939 kubeadm.go:310] [certs] Using the existing "sa" key
	I0827 15:30:53.075747    3939 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 15:30:53.173526    3939 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 15:30:53.273948    3939 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 15:30:53.305423    3939 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 15:30:53.590267    3939 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 15:30:53.620918    3939 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 15:30:53.621269    3939 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 15:30:53.621318    3939 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 15:30:53.703145    3939 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 15:30:53.708327    3939 out.go:235]   - Booting up control plane ...
	I0827 15:30:53.708379    3939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 15:30:53.708424    3939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 15:30:53.708465    3939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 15:30:53.708510    3939 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 15:30:53.708594    3939 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
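
kubeadm now blocks until the kubelet has started the static pods and the apiserver answers /healthz, giving up after the quoted 4m0s. A sketch of that wait, assuming a 2s poll interval and eliding TLS setup:

    package sketch

    import (
    	"context"
    	"errors"
    	"net/http"
    	"time"
    )

    // waitControlPlane polls /healthz until it returns 200 or the 4m0s
    // budget quoted by kubeadm above is exhausted.
    func waitControlPlane(base string) error {
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	tick := time.NewTicker(2 * time.Second) // assumed interval
    	defer tick.Stop()
    	for {
    		select {
    		case <-ctx.Done():
    			return errors.New("control plane not healthy within 4m0s")
    		case <-tick.C:
    			req, _ := http.NewRequestWithContext(ctx, http.MethodGet, base+"/healthz", nil)
    			if resp, err := http.DefaultClient.Do(req); err == nil {
    				resp.Body.Close()
    				if resp.StatusCode == http.StatusOK {
    					return nil
    				}
    			}
    		}
    	}
    }
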
	I0827 15:30:56.287290    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:56.287407    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:56.298833    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:30:56.298908    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:56.310039    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:30:56.310129    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:56.321403    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:30:56.321472    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:56.332280    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:30:56.332358    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:56.343410    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:30:56.343480    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:56.354307    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:30:56.354378    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:56.365138    3801 logs.go:276] 0 containers: []
	W0827 15:30:56.365149    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:56.365214    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:56.376063    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:30:56.376081    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:30:56.376086    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:30:56.391557    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:30:56.391568    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:30:56.409285    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:30:56.409296    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:30:56.421972    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:56.421983    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:56.446532    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:30:56.446552    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:30:56.462132    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:30:56.462142    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:56.474298    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:56.474312    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:56.479388    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:56.479400    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:56.516686    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:30:56.516699    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:30:56.529168    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:30:56.529180    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:30:56.541802    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:30:56.541812    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:30:56.561051    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:30:56.561062    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:30:56.579871    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:56.579884    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:56.616301    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:30:56.616314    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:30:56.630559    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:30:56.630571    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:30:58.208215    3939 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501412 seconds
	I0827 15:30:58.208285    3939 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0827 15:30:58.212255    3939 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0827 15:30:58.719877    3939 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0827 15:30:58.720001    3939 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-443000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0827 15:30:59.223353    3939 kubeadm.go:310] [bootstrap-token] Using token: 7c6cpc.ok1xbhjqz814b55n
	I0827 15:30:59.229128    3939 out.go:235]   - Configuring RBAC rules ...
	I0827 15:30:59.229193    3939 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0827 15:30:59.229247    3939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0827 15:30:59.236841    3939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0827 15:30:59.237609    3939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0827 15:30:59.238404    3939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0827 15:30:59.239139    3939 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0827 15:30:59.242079    3939 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0827 15:30:59.393861    3939 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0827 15:30:59.627276    3939 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0827 15:30:59.627894    3939 kubeadm.go:310] 
	I0827 15:30:59.627927    3939 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0827 15:30:59.627955    3939 kubeadm.go:310] 
	I0827 15:30:59.627996    3939 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0827 15:30:59.628000    3939 kubeadm.go:310] 
	I0827 15:30:59.628015    3939 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0827 15:30:59.628062    3939 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0827 15:30:59.628094    3939 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0827 15:30:59.628122    3939 kubeadm.go:310] 
	I0827 15:30:59.628153    3939 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0827 15:30:59.628156    3939 kubeadm.go:310] 
	I0827 15:30:59.628186    3939 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0827 15:30:59.628191    3939 kubeadm.go:310] 
	I0827 15:30:59.628230    3939 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0827 15:30:59.628276    3939 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0827 15:30:59.628355    3939 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0827 15:30:59.628364    3939 kubeadm.go:310] 
	I0827 15:30:59.628425    3939 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0827 15:30:59.628468    3939 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0827 15:30:59.628470    3939 kubeadm.go:310] 
	I0827 15:30:59.628509    3939 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7c6cpc.ok1xbhjqz814b55n \
	I0827 15:30:59.628572    3939 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e40211cdbb70880cf4203fcff26994c3c3ef69e4bd2b230e97a832f2aa67022 \
	I0827 15:30:59.628592    3939 kubeadm.go:310] 	--control-plane 
	I0827 15:30:59.628594    3939 kubeadm.go:310] 
	I0827 15:30:59.628633    3939 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0827 15:30:59.628639    3939 kubeadm.go:310] 
	I0827 15:30:59.628700    3939 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7c6cpc.ok1xbhjqz814b55n \
	I0827 15:30:59.628757    3939 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e40211cdbb70880cf4203fcff26994c3c3ef69e4bd2b230e97a832f2aa67022 
	I0827 15:30:59.628914    3939 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0827 15:30:59.628925    3939 cni.go:84] Creating CNI manager for ""
	I0827 15:30:59.628934    3939 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:30:59.633071    3939 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0827 15:30:59.638955    3939 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0827 15:30:59.641962    3939 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
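
The 496-byte conflist itself is not reproduced in the log. The sketch below writes a representative bridge conflist of the kind this step installs; the field values are illustrative, not the exact file minikube generated:

    package main

    import "os"

    // A representative bridge CNI conflist; the actual 1-k8s.conflist may
    // differ in fields and values (the log records only its size, 496 bytes).
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// Equivalent of the scp above: place the conflist where the container
    	// runtime's CNI loader will find it (requires root on the node).
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist",
    		[]byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    }
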
	I0827 15:30:59.646706    3939 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0827 15:30:59.646748    3939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 15:30:59.646749    3939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-443000 minikube.k8s.io/updated_at=2024_08_27T15_30_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf minikube.k8s.io/name=stopped-upgrade-443000 minikube.k8s.io/primary=true
	I0827 15:30:59.649827    3939 ops.go:34] apiserver oom_adj: -16
	I0827 15:30:59.689361    3939 kubeadm.go:1113] duration metric: took 42.647625ms to wait for elevateKubeSystemPrivileges
	I0827 15:30:59.689376    3939 kubeadm.go:394] duration metric: took 4m11.415037375s to StartCluster
	I0827 15:30:59.689386    3939 settings.go:142] acquiring lock: {Name:mk8039639095abb20902a2ce8e0a004770b18340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:30:59.689474    3939 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:30:59.689885    3939 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/kubeconfig: {Name:mk76bdfc088f48bbbf806c94a3244a333f8aeabd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:30:59.690100    3939 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:30:59.690109    3939 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0827 15:30:59.690149    3939 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-443000"
	I0827 15:30:59.690162    3939 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-443000"
	W0827 15:30:59.690167    3939 addons.go:243] addon storage-provisioner should already be in state true
	I0827 15:30:59.690179    3939 host.go:66] Checking if "stopped-upgrade-443000" exists ...
	I0827 15:30:59.690182    3939 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-443000"
	I0827 15:30:59.690208    3939 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-443000"
	I0827 15:30:59.690210    3939 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:30:59.691122    3939 kapi.go:59] client config for stopped-upgrade-443000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/client.key", CAFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103fdbeb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
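
The rest.Config dump shows how the client is assembled: host from the node IP and port, TLS client certificate and key from the profile directory, cluster CA from .minikube/ca.crt, and no bearer token. A minimal client-go equivalent using the paths visible above:

    package sketch

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // newProfileClient builds a clientset the way the rest.Config above is
    // populated: TLS client cert/key plus the cluster CA, no bearer token.
    func newProfileClient() (*kubernetes.Clientset, error) {
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/client.crt",
    			KeyFile:  "/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt",
    		},
    	}
    	return kubernetes.NewForConfig(cfg)
    }
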
	I0827 15:30:59.691237    3939 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-443000"
	W0827 15:30:59.691242    3939 addons.go:243] addon default-storageclass should already be in state true
	I0827 15:30:59.691250    3939 host.go:66] Checking if "stopped-upgrade-443000" exists ...
	I0827 15:30:59.693001    3939 out.go:177] * Verifying Kubernetes components...
	I0827 15:30:59.693304    3939 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0827 15:30:59.697183    3939 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0827 15:30:59.697190    3939 sshutil.go:53] new ssh client: &{IP:localhost Port:50458 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0827 15:30:59.700866    3939 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:30:59.146328    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:59.704023    3939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:30:59.706957    3939 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 15:30:59.706964    3939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0827 15:30:59.706971    3939 sshutil.go:53] new ssh client: &{IP:localhost Port:50458 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0827 15:30:59.797021    3939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 15:30:59.803481    3939 api_server.go:52] waiting for apiserver process to appear ...
	I0827 15:30:59.803526    3939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 15:30:59.808073    3939 api_server.go:72] duration metric: took 117.96625ms to wait for apiserver process to appear ...
	I0827 15:30:59.808081    3939 api_server.go:88] waiting for apiserver healthz status ...
	I0827 15:30:59.808089    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:59.842517    3939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0827 15:30:59.858351    3939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 15:31:00.231185    3939 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0827 15:31:00.231198    3939 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
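
The two envvar.go lines are client-go's environment-driven feature gates reporting their defaults. Assuming client-go's KUBE_FEATURE_<Name> environment convention, a gate can be flipped for a single process before any client is constructed:

    package main

    import "os"

    // client-go's envvar feature gates read KUBE_FEATURE_<name> from the
    // environment at first use; setting one before clients are built flips
    // the gate whose default state is logged above. The prefix is assumed
    // from client-go's features/envvar.go.
    func main() {
    	os.Setenv("KUBE_FEATURE_WatchListClient", "true")
    	// ... construct clients afterwards; the gate is read lazily.
    }
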
	I0827 15:31:04.148463    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:04.148747    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:31:04.168581    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:31:04.168668    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:31:04.187037    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:31:04.187119    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:31:04.198934    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:31:04.199001    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:31:04.209479    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:31:04.209550    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:31:04.220320    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:31:04.220384    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:31:04.231229    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:31:04.231290    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:31:04.241928    3801 logs.go:276] 0 containers: []
	W0827 15:31:04.241938    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:31:04.241988    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:31:04.252069    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:31:04.252084    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:31:04.252090    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:31:04.266323    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:31:04.266332    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:31:04.281289    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:31:04.281299    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:31:04.305545    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:31:04.305553    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:31:04.309948    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:31:04.309955    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:31:04.322644    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:31:04.322656    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:31:04.334349    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:31:04.334365    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:31:04.345649    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:31:04.345660    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:31:04.357184    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:31:04.357195    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:31:04.374628    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:31:04.374639    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:31:04.386137    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:31:04.386148    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:31:04.421535    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:31:04.421545    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:31:04.457814    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:31:04.457827    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:31:04.472954    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:31:04.472965    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:31:04.486592    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:31:04.486603    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:31:06.999793    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:04.810068    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:04.810089    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:12.001872    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:12.001965    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:31:12.018444    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:31:12.018513    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:31:12.029436    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:31:12.029505    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:31:12.040735    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:31:12.040811    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:31:12.051506    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:31:12.051575    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:31:12.061904    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:31:12.061967    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:31:12.072627    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:31:12.072695    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:31:12.082835    3801 logs.go:276] 0 containers: []
	W0827 15:31:12.082845    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:31:12.082904    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:31:12.093726    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:31:12.093741    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:31:12.093746    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:31:12.108332    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:31:12.108342    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:31:12.119974    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:31:12.119987    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:31:12.139652    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:31:12.139662    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:31:12.151242    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:31:12.151252    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:31:12.155850    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:31:12.155858    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:31:12.167501    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:31:12.167512    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:31:12.184366    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:31:12.184377    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:31:12.195593    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:31:12.195603    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:31:12.213314    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:31:12.213325    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:31:12.225324    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:31:12.225335    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:31:12.260668    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:31:12.260675    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:31:12.295511    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:31:12.295521    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:31:12.310065    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:31:12.310078    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:31:12.334639    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:31:12.334656    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:31:09.810142    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:09.810194    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:14.854104    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:14.810340    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:14.810377    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:19.854818    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:19.854941    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:31:19.871478    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:31:19.871555    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:31:19.882618    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:31:19.882690    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:31:19.893463    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:31:19.893536    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:31:19.904582    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:31:19.904650    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:31:19.916102    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:31:19.916175    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:31:19.935730    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:31:19.935798    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:31:19.945797    3801 logs.go:276] 0 containers: []
	W0827 15:31:19.945807    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:31:19.945858    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:31:19.956642    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:31:19.956660    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:31:19.956666    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:31:19.968982    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:31:19.968993    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:31:19.980791    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:31:19.980800    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:31:19.992880    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:31:19.992890    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:31:20.004333    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:31:20.004345    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:31:20.018334    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:31:20.018353    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:31:20.030376    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:31:20.030387    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:31:20.042088    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:31:20.042100    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:31:20.057157    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:31:20.057168    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:31:20.075638    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:31:20.075647    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:31:20.080231    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:31:20.080238    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:31:20.112976    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:31:20.112984    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:31:20.126868    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:31:20.126879    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:31:20.139590    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:31:20.139603    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:31:20.164298    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:31:20.164306    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:31:22.705847    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:19.810688    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:19.810725    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:27.707911    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:27.708117    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:31:27.725657    3801 logs.go:276] 1 containers: [bf336df465bc]
	I0827 15:31:27.725746    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:31:27.739317    3801 logs.go:276] 1 containers: [c07f15b168a6]
	I0827 15:31:27.739392    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:31:27.750967    3801 logs.go:276] 4 containers: [f32903ed8e0c 0cdafa20fd0a bacf943f7873 fb03113f9fbd]
	I0827 15:31:27.751034    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:31:27.761454    3801 logs.go:276] 1 containers: [81f2d02be406]
	I0827 15:31:27.761526    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:31:27.771548    3801 logs.go:276] 1 containers: [d1373e4a45ba]
	I0827 15:31:27.771619    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:31:27.782805    3801 logs.go:276] 1 containers: [13a20142a2e0]
	I0827 15:31:27.782879    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:31:27.793156    3801 logs.go:276] 0 containers: []
	W0827 15:31:27.793169    3801 logs.go:278] No container was found matching "kindnet"
	I0827 15:31:27.793224    3801 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:31:27.802936    3801 logs.go:276] 1 containers: [d20687948062]
	I0827 15:31:27.802955    3801 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:31:27.802960    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:31:27.838959    3801 logs.go:123] Gathering logs for kube-proxy [d1373e4a45ba] ...
	I0827 15:31:27.838972    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1373e4a45ba"
	I0827 15:31:27.851529    3801 logs.go:123] Gathering logs for kube-controller-manager [13a20142a2e0] ...
	I0827 15:31:27.851540    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13a20142a2e0"
	I0827 15:31:27.871838    3801 logs.go:123] Gathering logs for container status ...
	I0827 15:31:27.871852    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:31:27.883195    3801 logs.go:123] Gathering logs for dmesg ...
	I0827 15:31:27.883207    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:31:27.887683    3801 logs.go:123] Gathering logs for etcd [c07f15b168a6] ...
	I0827 15:31:27.887690    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c07f15b168a6"
	I0827 15:31:27.903222    3801 logs.go:123] Gathering logs for kubelet ...
	I0827 15:31:27.903233    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:31:27.938918    3801 logs.go:123] Gathering logs for kube-scheduler [81f2d02be406] ...
	I0827 15:31:27.938930    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81f2d02be406"
	I0827 15:31:27.954706    3801 logs.go:123] Gathering logs for Docker ...
	I0827 15:31:27.954717    3801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:31:27.978820    3801 logs.go:123] Gathering logs for coredns [fb03113f9fbd] ...
	I0827 15:31:27.978834    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb03113f9fbd"
	I0827 15:31:27.990607    3801 logs.go:123] Gathering logs for storage-provisioner [d20687948062] ...
	I0827 15:31:27.990621    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d20687948062"
	I0827 15:31:28.002268    3801 logs.go:123] Gathering logs for kube-apiserver [bf336df465bc] ...
	I0827 15:31:28.002283    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf336df465bc"
	I0827 15:31:28.016307    3801 logs.go:123] Gathering logs for coredns [f32903ed8e0c] ...
	I0827 15:31:28.016316    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f32903ed8e0c"
	I0827 15:31:28.027947    3801 logs.go:123] Gathering logs for coredns [0cdafa20fd0a] ...
	I0827 15:31:28.027956    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cdafa20fd0a"
	I0827 15:31:28.039281    3801 logs.go:123] Gathering logs for coredns [bacf943f7873] ...
	I0827 15:31:28.039294    3801 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bacf943f7873"
	I0827 15:31:24.811175    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:24.811208    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:29.811773    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:29.811801    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0827 15:31:30.232545    3939 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0827 15:31:30.237817    3939 out.go:177] * Enabled addons: storage-provisioner
	I0827 15:31:30.551913    3801 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:30.249609    3939 addons.go:510] duration metric: took 30.560510417s for enable addons: enabled=[storage-provisioner]
	I0827 15:31:35.554233    3801 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:35.558832    3801 out.go:201] 
	W0827 15:31:35.562973    3801 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0827 15:31:35.562991    3801 out.go:270] * 
	W0827 15:31:35.564147    3801 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:31:35.574898    3801 out.go:201] 
	I0827 15:31:34.812567    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:34.812627    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:39.813789    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:39.813825    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:44.815224    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:44.815260    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-08-27 22:22:36 UTC, ends at Tue 2024-08-27 22:31:51 UTC. --
	Aug 27 22:31:36 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:36Z" level=error msg="ContainerStats resp: {0x400083cc00 linux}"
	Aug 27 22:31:36 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:36Z" level=error msg="ContainerStats resp: {0x400039ff40 linux}"
	Aug 27 22:31:36 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:36Z" level=error msg="ContainerStats resp: {0x40008881c0 linux}"
	Aug 27 22:31:36 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:36Z" level=error msg="ContainerStats resp: {0x4000199080 linux}"
	Aug 27 22:31:37 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:37Z" level=error msg="ContainerStats resp: {0x4000658080 linux}"
	Aug 27 22:31:38 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:38Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 27 22:31:38 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:38Z" level=error msg="ContainerStats resp: {0x40007d8e40 linux}"
	Aug 27 22:31:38 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:38Z" level=error msg="ContainerStats resp: {0x400091ed80 linux}"
	Aug 27 22:31:38 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:38Z" level=error msg="ContainerStats resp: {0x40007d8040 linux}"
	Aug 27 22:31:38 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:38Z" level=error msg="ContainerStats resp: {0x400091e580 linux}"
	Aug 27 22:31:38 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:38Z" level=error msg="ContainerStats resp: {0x400091f3c0 linux}"
	Aug 27 22:31:38 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:38Z" level=error msg="ContainerStats resp: {0x40007d85c0 linux}"
	Aug 27 22:31:38 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:38Z" level=error msg="ContainerStats resp: {0x400091fe80 linux}"
	Aug 27 22:31:43 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:43Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 27 22:31:48 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:48Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 27 22:31:48 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:48Z" level=error msg="ContainerStats resp: {0x40007b8d40 linux}"
	Aug 27 22:31:48 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:48Z" level=error msg="ContainerStats resp: {0x400072a7c0 linux}"
	Aug 27 22:31:49 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:49Z" level=error msg="ContainerStats resp: {0x40004ff9c0 linux}"
	Aug 27 22:31:50 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:50Z" level=error msg="ContainerStats resp: {0x400083e780 linux}"
	Aug 27 22:31:50 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:50Z" level=error msg="ContainerStats resp: {0x400083eb40 linux}"
	Aug 27 22:31:50 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:50Z" level=error msg="ContainerStats resp: {0x40001c8a80 linux}"
	Aug 27 22:31:50 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:50Z" level=error msg="ContainerStats resp: {0x400083f800 linux}"
	Aug 27 22:31:50 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:50Z" level=error msg="ContainerStats resp: {0x40001c9200 linux}"
	Aug 27 22:31:50 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:50Z" level=error msg="ContainerStats resp: {0x400083fe80 linux}"
	Aug 27 22:31:50 running-upgrade-301000 cri-dockerd[3080]: time="2024-08-27T22:31:50Z" level=error msg="ContainerStats resp: {0x4000898180 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ae889d778fa78       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   0a7663497c672
	db7bfdea5264f       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   b069176e9333e
	f32903ed8e0cf       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   0a7663497c672
	0cdafa20fd0a0       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   b069176e9333e
	d1373e4a45ba5       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   408e72168e7d0
	d20687948062c       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   9ffbe8c4816c8
	c07f15b168a6b       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   dd7bc56207b21
	bf336df465bce       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   e0cdaaed2f028
	81f2d02be4061       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   e29969a62b895
	13a20142a2e03       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   65ab41853da48
	
	
	==> coredns [0cdafa20fd0a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6911927998193068034.8099382348852090390. HINFO: read udp 10.244.0.3:54905->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6911927998193068034.8099382348852090390. HINFO: read udp 10.244.0.3:55773->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6911927998193068034.8099382348852090390. HINFO: read udp 10.244.0.3:56808->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6911927998193068034.8099382348852090390. HINFO: read udp 10.244.0.3:39391->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6911927998193068034.8099382348852090390. HINFO: read udp 10.244.0.3:35994->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6911927998193068034.8099382348852090390. HINFO: read udp 10.244.0.3:60033->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6911927998193068034.8099382348852090390. HINFO: read udp 10.244.0.3:59175->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6911927998193068034.8099382348852090390. HINFO: read udp 10.244.0.3:38810->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6911927998193068034.8099382348852090390. HINFO: read udp 10.244.0.3:42960->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6911927998193068034.8099382348852090390. HINFO: read udp 10.244.0.3:54971->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ae889d778fa7] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7299516118048763656.7751431554458810276. HINFO: read udp 10.244.0.2:41022->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7299516118048763656.7751431554458810276. HINFO: read udp 10.244.0.2:45149->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7299516118048763656.7751431554458810276. HINFO: read udp 10.244.0.2:46260->10.0.2.3:53: i/o timeout
	
	
	==> coredns [db7bfdea5264] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1838328997223947638.5517820705289557318. HINFO: read udp 10.244.0.3:35082->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1838328997223947638.5517820705289557318. HINFO: read udp 10.244.0.3:40927->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1838328997223947638.5517820705289557318. HINFO: read udp 10.244.0.3:33667->10.0.2.3:53: i/o timeout
	
	
	==> coredns [f32903ed8e0c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6101039564432397773.7430310292892757777. HINFO: read udp 10.244.0.2:42128->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6101039564432397773.7430310292892757777. HINFO: read udp 10.244.0.2:45687->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6101039564432397773.7430310292892757777. HINFO: read udp 10.244.0.2:35202->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-301000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-301000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=running-upgrade-301000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_27T15_27_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:27:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-301000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:31:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:27:34 +0000   Tue, 27 Aug 2024 22:27:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:27:34 +0000   Tue, 27 Aug 2024 22:27:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:27:34 +0000   Tue, 27 Aug 2024 22:27:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:27:34 +0000   Tue, 27 Aug 2024 22:27:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-301000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 c808a4b941a446869306f800b4d4011c
	  System UUID:                c808a4b941a446869306f800b4d4011c
	  Boot ID:                    298ddcaa-d0b6-4681-bd65-0b720e589470
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2fxwm                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m5s
	  kube-system                 coredns-6d4b75cb6d-68f49                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m5s
	  kube-system                 etcd-running-upgrade-301000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-301000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-301000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-86l5x                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-301000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m18s  kubelet          Node running-upgrade-301000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m18s  kubelet          Node running-upgrade-301000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s  kubelet          Node running-upgrade-301000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s  kubelet          Node running-upgrade-301000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m18s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m6s   node-controller  Node running-upgrade-301000 event: Registered Node running-upgrade-301000 in Controller
	
	
	==> dmesg <==
	[  +1.662526] systemd-fstab-generator[877]: Ignoring "noauto" for root device
	[  +0.058716] systemd-fstab-generator[888]: Ignoring "noauto" for root device
	[  +0.074517] systemd-fstab-generator[899]: Ignoring "noauto" for root device
	[  +1.135329] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.080728] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +0.057567] systemd-fstab-generator[1060]: Ignoring "noauto" for root device
	[  +2.251877] systemd-fstab-generator[1293]: Ignoring "noauto" for root device
	[Aug27 22:23] systemd-fstab-generator[1951]: Ignoring "noauto" for root device
	[  +2.476660] systemd-fstab-generator[2219]: Ignoring "noauto" for root device
	[  +0.187016] systemd-fstab-generator[2258]: Ignoring "noauto" for root device
	[  +0.078340] systemd-fstab-generator[2269]: Ignoring "noauto" for root device
	[  +0.082575] systemd-fstab-generator[2282]: Ignoring "noauto" for root device
	[ +12.605489] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.202571] systemd-fstab-generator[3035]: Ignoring "noauto" for root device
	[  +0.071062] systemd-fstab-generator[3048]: Ignoring "noauto" for root device
	[  +0.055465] systemd-fstab-generator[3059]: Ignoring "noauto" for root device
	[  +0.077058] systemd-fstab-generator[3073]: Ignoring "noauto" for root device
	[  +2.259891] systemd-fstab-generator[3223]: Ignoring "noauto" for root device
	[  +2.743073] systemd-fstab-generator[3595]: Ignoring "noauto" for root device
	[  +1.015835] systemd-fstab-generator[3723]: Ignoring "noauto" for root device
	[ +17.903369] kauditd_printk_skb: 68 callbacks suppressed
	[Aug27 22:27] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.460766] systemd-fstab-generator[11897]: Ignoring "noauto" for root device
	[  +5.630422] systemd-fstab-generator[12502]: Ignoring "noauto" for root device
	[  +0.475118] systemd-fstab-generator[12637]: Ignoring "noauto" for root device
	
	
	==> etcd [c07f15b168a6] <==
	{"level":"info","ts":"2024-08-27T22:27:30.157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-27T22:27:30.158Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-27T22:27:30.158Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-27T22:27:30.158Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-27T22:27:30.158Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-27T22:27:30.158Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-27T22:27:30.158Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-27T22:27:30.854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-27T22:27:30.854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-27T22:27:30.854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-27T22:27:30.854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-27T22:27:30.854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-27T22:27:30.854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-27T22:27:30.854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-27T22:27:30.854Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T22:27:30.854Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T22:27:30.854Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T22:27:30.854Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-301000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-27T22:27:30.854Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T22:27:30.854Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T22:27:30.855Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-27T22:27:30.855Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T22:27:30.855Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-27T22:27:30.862Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-27T22:27:30.862Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 22:31:52 up 9 min,  0 users,  load average: 0.38, 0.46, 0.27
	Linux running-upgrade-301000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [bf336df465bc] <==
	I0827 22:27:32.015181       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0827 22:27:32.033146       1 controller.go:611] quota admission added evaluator for: namespaces
	I0827 22:27:32.074245       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0827 22:27:32.074269       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0827 22:27:32.074279       1 cache.go:39] Caches are synced for autoregister controller
	I0827 22:27:32.074410       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0827 22:27:32.074532       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0827 22:27:32.802840       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0827 22:27:32.977292       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0827 22:27:32.979764       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0827 22:27:32.979829       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0827 22:27:33.107117       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0827 22:27:33.116818       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0827 22:27:33.137388       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0827 22:27:33.139864       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0827 22:27:33.140239       1 controller.go:611] quota admission added evaluator for: endpoints
	I0827 22:27:33.141435       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0827 22:27:34.127719       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0827 22:27:34.696318       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0827 22:27:34.699673       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0827 22:27:34.717965       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0827 22:27:34.758420       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0827 22:27:47.632349       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0827 22:27:47.781719       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0827 22:27:48.174145       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [13a20142a2e0] <==
	I0827 22:27:46.945513       1 shared_informer.go:262] Caches are synced for PVC protection
	I0827 22:27:46.949769       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0827 22:27:46.952043       1 shared_informer.go:262] Caches are synced for persistent volume
	I0827 22:27:46.954156       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0827 22:27:46.977337       1 shared_informer.go:262] Caches are synced for deployment
	I0827 22:27:46.977337       1 shared_informer.go:262] Caches are synced for HPA
	I0827 22:27:46.978521       1 shared_informer.go:262] Caches are synced for job
	I0827 22:27:46.979673       1 shared_informer.go:262] Caches are synced for stateful set
	I0827 22:27:46.981786       1 shared_informer.go:262] Caches are synced for expand
	I0827 22:27:46.982593       1 shared_informer.go:262] Caches are synced for attach detach
	I0827 22:27:47.001149       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0827 22:27:47.119972       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0827 22:27:47.130038       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0827 22:27:47.135057       1 shared_informer.go:262] Caches are synced for resource quota
	I0827 22:27:47.180245       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0827 22:27:47.185412       1 shared_informer.go:262] Caches are synced for resource quota
	I0827 22:27:47.230038       1 shared_informer.go:262] Caches are synced for disruption
	I0827 22:27:47.230093       1 disruption.go:371] Sending events to api server.
	I0827 22:27:47.600949       1 shared_informer.go:262] Caches are synced for garbage collector
	I0827 22:27:47.635159       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-86l5x"
	I0827 22:27:47.642279       1 shared_informer.go:262] Caches are synced for garbage collector
	I0827 22:27:47.642874       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0827 22:27:47.783099       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0827 22:27:47.983105       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-68f49"
	I0827 22:27:47.985381       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2fxwm"
	
	
	==> kube-proxy [d1373e4a45ba] <==
	I0827 22:27:48.162863       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0827 22:27:48.162885       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0827 22:27:48.162895       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0827 22:27:48.172157       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0827 22:27:48.172167       1 server_others.go:206] "Using iptables Proxier"
	I0827 22:27:48.172181       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0827 22:27:48.172275       1 server.go:661] "Version info" version="v1.24.1"
	I0827 22:27:48.172279       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 22:27:48.172492       1 config.go:317] "Starting service config controller"
	I0827 22:27:48.172499       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0827 22:27:48.172506       1 config.go:226] "Starting endpoint slice config controller"
	I0827 22:27:48.172508       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0827 22:27:48.172726       1 config.go:444] "Starting node config controller"
	I0827 22:27:48.172727       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0827 22:27:48.273898       1 shared_informer.go:262] Caches are synced for node config
	I0827 22:27:48.274046       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0827 22:27:48.274052       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [81f2d02be406] <==
	W0827 22:27:32.033796       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0827 22:27:32.033807       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0827 22:27:32.033846       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0827 22:27:32.033857       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0827 22:27:32.033874       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0827 22:27:32.033883       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0827 22:27:32.033925       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0827 22:27:32.033936       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0827 22:27:32.033964       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0827 22:27:32.033976       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0827 22:27:32.034033       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0827 22:27:32.034045       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0827 22:27:32.034087       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0827 22:27:32.034102       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0827 22:27:32.859847       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0827 22:27:32.859901       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0827 22:27:32.889265       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0827 22:27:32.889302       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0827 22:27:32.934840       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0827 22:27:32.934864       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0827 22:27:32.974278       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0827 22:27:32.974446       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0827 22:27:33.029810       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0827 22:27:33.029933       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0827 22:27:33.130463       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-08-27 22:22:36 UTC, ends at Tue 2024-08-27 22:31:52 UTC. --
	Aug 27 22:27:35 running-upgrade-301000 kubelet[12508]: I0827 22:27:35.749485   12508 apiserver.go:52] "Watching apiserver"
	Aug 27 22:27:36 running-upgrade-301000 kubelet[12508]: I0827 22:27:36.163293   12508 reconciler.go:157] "Reconciler: start to sync state"
	Aug 27 22:27:36 running-upgrade-301000 kubelet[12508]: E0827 22:27:36.331204   12508 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-301000\" already exists" pod="kube-system/etcd-running-upgrade-301000"
	Aug 27 22:27:36 running-upgrade-301000 kubelet[12508]: E0827 22:27:36.534318   12508 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-301000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-301000"
	Aug 27 22:27:46 running-upgrade-301000 kubelet[12508]: I0827 22:27:46.918566   12508 topology_manager.go:200] "Topology Admit Handler"
	Aug 27 22:27:46 running-upgrade-301000 kubelet[12508]: I0827 22:27:46.954337   12508 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 27 22:27:46 running-upgrade-301000 kubelet[12508]: I0827 22:27:46.954452   12508 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cbcd6549-6552-45a5-8cb5-2db6caaad37c-tmp\") pod \"storage-provisioner\" (UID: \"cbcd6549-6552-45a5-8cb5-2db6caaad37c\") " pod="kube-system/storage-provisioner"
	Aug 27 22:27:46 running-upgrade-301000 kubelet[12508]: I0827 22:27:46.954465   12508 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8fl4\" (UniqueName: \"kubernetes.io/projected/cbcd6549-6552-45a5-8cb5-2db6caaad37c-kube-api-access-x8fl4\") pod \"storage-provisioner\" (UID: \"cbcd6549-6552-45a5-8cb5-2db6caaad37c\") " pod="kube-system/storage-provisioner"
	Aug 27 22:27:46 running-upgrade-301000 kubelet[12508]: I0827 22:27:46.954845   12508 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 27 22:27:47 running-upgrade-301000 kubelet[12508]: E0827 22:27:47.058621   12508 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 27 22:27:47 running-upgrade-301000 kubelet[12508]: E0827 22:27:47.058684   12508 projected.go:192] Error preparing data for projected volume kube-api-access-x8fl4 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 27 22:27:47 running-upgrade-301000 kubelet[12508]: E0827 22:27:47.058730   12508 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/cbcd6549-6552-45a5-8cb5-2db6caaad37c-kube-api-access-x8fl4 podName:cbcd6549-6552-45a5-8cb5-2db6caaad37c nodeName:}" failed. No retries permitted until 2024-08-27 22:27:47.558708685 +0000 UTC m=+12.877744002 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x8fl4" (UniqueName: "kubernetes.io/projected/cbcd6549-6552-45a5-8cb5-2db6caaad37c-kube-api-access-x8fl4") pod "storage-provisioner" (UID: "cbcd6549-6552-45a5-8cb5-2db6caaad37c") : configmap "kube-root-ca.crt" not found
	Aug 27 22:27:47 running-upgrade-301000 kubelet[12508]: I0827 22:27:47.640087   12508 topology_manager.go:200] "Topology Admit Handler"
	Aug 27 22:27:47 running-upgrade-301000 kubelet[12508]: I0827 22:27:47.757829   12508 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef8d4c61-d567-4be7-8f84-673e074e3b1d-xtables-lock\") pod \"kube-proxy-86l5x\" (UID: \"ef8d4c61-d567-4be7-8f84-673e074e3b1d\") " pod="kube-system/kube-proxy-86l5x"
	Aug 27 22:27:47 running-upgrade-301000 kubelet[12508]: I0827 22:27:47.757854   12508 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef8d4c61-d567-4be7-8f84-673e074e3b1d-lib-modules\") pod \"kube-proxy-86l5x\" (UID: \"ef8d4c61-d567-4be7-8f84-673e074e3b1d\") " pod="kube-system/kube-proxy-86l5x"
	Aug 27 22:27:47 running-upgrade-301000 kubelet[12508]: I0827 22:27:47.757871   12508 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef8d4c61-d567-4be7-8f84-673e074e3b1d-kube-proxy\") pod \"kube-proxy-86l5x\" (UID: \"ef8d4c61-d567-4be7-8f84-673e074e3b1d\") " pod="kube-system/kube-proxy-86l5x"
	Aug 27 22:27:47 running-upgrade-301000 kubelet[12508]: I0827 22:27:47.757881   12508 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rltx\" (UniqueName: \"kubernetes.io/projected/ef8d4c61-d567-4be7-8f84-673e074e3b1d-kube-api-access-9rltx\") pod \"kube-proxy-86l5x\" (UID: \"ef8d4c61-d567-4be7-8f84-673e074e3b1d\") " pod="kube-system/kube-proxy-86l5x"
	Aug 27 22:27:47 running-upgrade-301000 kubelet[12508]: I0827 22:27:47.988672   12508 topology_manager.go:200] "Topology Admit Handler"
	Aug 27 22:27:47 running-upgrade-301000 kubelet[12508]: I0827 22:27:47.988748   12508 topology_manager.go:200] "Topology Admit Handler"
	Aug 27 22:27:48 running-upgrade-301000 kubelet[12508]: I0827 22:27:48.060873   12508 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k629r\" (UniqueName: \"kubernetes.io/projected/cf7abe42-095d-4798-9c82-aa82c829ea56-kube-api-access-k629r\") pod \"coredns-6d4b75cb6d-68f49\" (UID: \"cf7abe42-095d-4798-9c82-aa82c829ea56\") " pod="kube-system/coredns-6d4b75cb6d-68f49"
	Aug 27 22:27:48 running-upgrade-301000 kubelet[12508]: I0827 22:27:48.060928   12508 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsjw6\" (UniqueName: \"kubernetes.io/projected/00c4c69d-a8ca-47d9-8b64-1d42bf4d3d4f-kube-api-access-jsjw6\") pod \"coredns-6d4b75cb6d-2fxwm\" (UID: \"00c4c69d-a8ca-47d9-8b64-1d42bf4d3d4f\") " pod="kube-system/coredns-6d4b75cb6d-2fxwm"
	Aug 27 22:27:48 running-upgrade-301000 kubelet[12508]: I0827 22:27:48.060941   12508 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00c4c69d-a8ca-47d9-8b64-1d42bf4d3d4f-config-volume\") pod \"coredns-6d4b75cb6d-2fxwm\" (UID: \"00c4c69d-a8ca-47d9-8b64-1d42bf4d3d4f\") " pod="kube-system/coredns-6d4b75cb6d-2fxwm"
	Aug 27 22:27:48 running-upgrade-301000 kubelet[12508]: I0827 22:27:48.060953   12508 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf7abe42-095d-4798-9c82-aa82c829ea56-config-volume\") pod \"coredns-6d4b75cb6d-68f49\" (UID: \"cf7abe42-095d-4798-9c82-aa82c829ea56\") " pod="kube-system/coredns-6d4b75cb6d-68f49"
	Aug 27 22:31:37 running-upgrade-301000 kubelet[12508]: I0827 22:31:37.085902   12508 scope.go:110] "RemoveContainer" containerID="bacf943f78732b2b4ace282243361baa7c7b2b18336385d3de0adaa63e1b7862"
	Aug 27 22:31:37 running-upgrade-301000 kubelet[12508]: I0827 22:31:37.094883   12508 scope.go:110] "RemoveContainer" containerID="fb03113f9fbd72fe2bdbbf25a7ddcdbf7651f69b84d4794bf2095f85e3ac374b"
	
	
	==> storage-provisioner [d20687948062] <==
	I0827 22:27:48.037544       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0827 22:27:48.046629       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0827 22:27:48.046666       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0827 22:27:48.050245       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0827 22:27:48.051394       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c5c44de8-a0d3-4098-96f3-92c88cd193ea", APIVersion:"v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-301000_d3af3a3e-edea-4858-b4f2-90db842c38b2 became leader
	I0827 22:27:48.051456       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-301000_d3af3a3e-edea-4858-b4f2-90db842c38b2!
	I0827 22:27:48.152281       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-301000_d3af3a3e-edea-4858-b4f2-90db842c38b2!
	

-- /stdout --
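
The kube-scheduler "forbidden" warnings in the log above are typical of control-plane bootstrap: the scheduler starts before its RBAC bindings have been reconciled, and the closing "Caches are synced" line shows its informers recovered once the permissions appeared. A minimal sketch for confirming the permissions settled, assuming the profile's kubeconfig context were still reachable (it is not once the cluster is stopped, as below):

	kubectl --context running-upgrade-301000 auth can-i list pods --as=system:kube-scheduler
	kubectl --context running-upgrade-301000 auth can-i watch nodes --as=system:kube-scheduler
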
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-301000 -n running-upgrade-301000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-301000 -n running-upgrade-301000: exit status 2 (15.585177959s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-301000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-301000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-301000
--- FAIL: TestRunningBinaryUpgrade (599.08s)
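
The profile was deleted with the apiserver still reporting "Stopped". A minimal local triage sketch before cleanup, using only commands and flags that already appear elsewhere in this report:

	out/minikube-darwin-arm64 status -p running-upgrade-301000
	out/minikube-darwin-arm64 -p running-upgrade-301000 logs --file=logs.txt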

TestKubernetesUpgrade (17.25s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-332000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-332000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.826547416s)

-- stdout --
	* [kubernetes-upgrade-332000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-332000" primary control-plane node in "kubernetes-upgrade-332000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-332000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:25:08.856193    3862 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:25:08.856356    3862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:25:08.856359    3862 out.go:358] Setting ErrFile to fd 2...
	I0827 15:25:08.856362    3862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:25:08.856504    3862 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:25:08.857604    3862 out.go:352] Setting JSON to false
	I0827 15:25:08.874134    3862 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3273,"bootTime":1724794235,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:25:08.874204    3862 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:25:08.881417    3862 out.go:177] * [kubernetes-upgrade-332000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:25:08.889421    3862 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:25:08.889466    3862 notify.go:220] Checking for updates...
	I0827 15:25:08.897321    3862 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:25:08.900439    3862 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:25:08.903342    3862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:25:08.906393    3862 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:25:08.909348    3862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:25:08.911068    3862 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:25:08.911138    3862 config.go:182] Loaded profile config "running-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:25:08.911190    3862 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:25:08.915318    3862 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:25:08.922202    3862 start.go:297] selected driver: qemu2
	I0827 15:25:08.922209    3862 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:25:08.922215    3862 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:25:08.924646    3862 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:25:08.928308    3862 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:25:08.931450    3862 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0827 15:25:08.931495    3862 cni.go:84] Creating CNI manager for ""
	I0827 15:25:08.931505    3862 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0827 15:25:08.931540    3862 start.go:340] cluster config:
	{Name:kubernetes-upgrade-332000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:25:08.935435    3862 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:25:08.946349    3862 out.go:177] * Starting "kubernetes-upgrade-332000" primary control-plane node in "kubernetes-upgrade-332000" cluster
	I0827 15:25:08.950394    3862 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0827 15:25:08.950408    3862 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0827 15:25:08.950415    3862 cache.go:56] Caching tarball of preloaded images
	I0827 15:25:08.950476    3862 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:25:08.950481    3862 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0827 15:25:08.950536    3862 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/kubernetes-upgrade-332000/config.json ...
	I0827 15:25:08.950547    3862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/kubernetes-upgrade-332000/config.json: {Name:mk87fbde9bd46f4446532c43d308c53468986396 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:25:08.950805    3862 start.go:360] acquireMachinesLock for kubernetes-upgrade-332000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:25:08.950842    3862 start.go:364] duration metric: took 28.708µs to acquireMachinesLock for "kubernetes-upgrade-332000"
	I0827 15:25:08.950854    3862 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:25:08.950882    3862 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:25:08.958435    3862 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:25:08.975168    3862 start.go:159] libmachine.API.Create for "kubernetes-upgrade-332000" (driver="qemu2")
	I0827 15:25:08.975190    3862 client.go:168] LocalClient.Create starting
	I0827 15:25:08.975247    3862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:25:08.975280    3862 main.go:141] libmachine: Decoding PEM data...
	I0827 15:25:08.975289    3862 main.go:141] libmachine: Parsing certificate...
	I0827 15:25:08.975337    3862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:25:08.975359    3862 main.go:141] libmachine: Decoding PEM data...
	I0827 15:25:08.975367    3862 main.go:141] libmachine: Parsing certificate...
	I0827 15:25:08.975837    3862 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:25:09.135348    3862 main.go:141] libmachine: Creating SSH key...
	I0827 15:25:09.231169    3862 main.go:141] libmachine: Creating Disk image...
	I0827 15:25:09.231175    3862 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:25:09.231429    3862 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/disk.qcow2
	I0827 15:25:09.241357    3862 main.go:141] libmachine: STDOUT: 
	I0827 15:25:09.241379    3862 main.go:141] libmachine: STDERR: 
	I0827 15:25:09.241451    3862 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/disk.qcow2 +20000M
	I0827 15:25:09.249752    3862 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:25:09.249770    3862 main.go:141] libmachine: STDERR: 
	I0827 15:25:09.249785    3862 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/disk.qcow2
	I0827 15:25:09.249791    3862 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:25:09.249802    3862 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:25:09.249829    3862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:35:8b:e0:99:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/disk.qcow2
	I0827 15:25:09.251470    3862 main.go:141] libmachine: STDOUT: 
	I0827 15:25:09.251485    3862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:25:09.251504    3862 client.go:171] duration metric: took 276.319167ms to LocalClient.Create
	I0827 15:25:11.253629    3862 start.go:128] duration metric: took 2.302796917s to createHost
	I0827 15:25:11.253727    3862 start.go:83] releasing machines lock for "kubernetes-upgrade-332000", held for 2.302952959s
	W0827 15:25:11.253793    3862 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:25:11.264564    3862 out.go:177] * Deleting "kubernetes-upgrade-332000" in qemu2 ...
	W0827 15:25:11.287766    3862 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:25:11.287796    3862 start.go:729] Will try again in 5 seconds ...
	I0827 15:25:16.288878    3862 start.go:360] acquireMachinesLock for kubernetes-upgrade-332000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:25:16.289521    3862 start.go:364] duration metric: took 498.125µs to acquireMachinesLock for "kubernetes-upgrade-332000"
	I0827 15:25:16.289595    3862 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:25:16.289833    3862 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:25:16.295503    3862 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:25:16.348696    3862 start.go:159] libmachine.API.Create for "kubernetes-upgrade-332000" (driver="qemu2")
	I0827 15:25:16.348747    3862 client.go:168] LocalClient.Create starting
	I0827 15:25:16.348873    3862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:25:16.348943    3862 main.go:141] libmachine: Decoding PEM data...
	I0827 15:25:16.348960    3862 main.go:141] libmachine: Parsing certificate...
	I0827 15:25:16.349020    3862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:25:16.349066    3862 main.go:141] libmachine: Decoding PEM data...
	I0827 15:25:16.349080    3862 main.go:141] libmachine: Parsing certificate...
	I0827 15:25:16.349684    3862 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:25:16.514120    3862 main.go:141] libmachine: Creating SSH key...
	I0827 15:25:16.587003    3862 main.go:141] libmachine: Creating Disk image...
	I0827 15:25:16.587009    3862 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:25:16.587266    3862 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/disk.qcow2
	I0827 15:25:16.596848    3862 main.go:141] libmachine: STDOUT: 
	I0827 15:25:16.596863    3862 main.go:141] libmachine: STDERR: 
	I0827 15:25:16.596910    3862 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/disk.qcow2 +20000M
	I0827 15:25:16.604904    3862 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:25:16.604922    3862 main.go:141] libmachine: STDERR: 
	I0827 15:25:16.604935    3862 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/disk.qcow2
	I0827 15:25:16.604940    3862 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:25:16.604949    3862 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:25:16.604988    3862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:99:44:28:a5:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/disk.qcow2
	I0827 15:25:16.606646    3862 main.go:141] libmachine: STDOUT: 
	I0827 15:25:16.606661    3862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:25:16.606675    3862 client.go:171] duration metric: took 257.928542ms to LocalClient.Create
	I0827 15:25:18.608831    3862 start.go:128] duration metric: took 2.319033125s to createHost
	I0827 15:25:18.608938    3862 start.go:83] releasing machines lock for "kubernetes-upgrade-332000", held for 2.319467333s
	W0827 15:25:18.609343    3862 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-332000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-332000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:25:18.618926    3862 out.go:201] 
	W0827 15:25:18.626941    3862 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:25:18.626977    3862 out.go:270] * 
	* 
	W0827 15:25:18.629594    3862 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:25:18.638922    3862 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-332000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
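
Both creation attempts failed at the same step: socket_vmnet_client could not reach the socket_vmnet daemon on /var/run/socket_vmnet, so QEMU was never launched. A minimal host-side check, assuming socket_vmnet was installed via Homebrew as the /opt/socket_vmnet paths in the log suggest (service name per the Homebrew formula; verify locally):

	# does the unix socket exist on the build host?
	ls -l /var/run/socket_vmnet
	# the daemon must run as root to create the vmnet interface
	sudo brew services restart socket_vmnet
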
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-332000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-332000: (2.027585417s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-332000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-332000 status --format={{.Host}}: exit status 7 (43.182166ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
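
minikube encodes the machine's state in the status command's exit code, which is why the harness treats exit status 7 as "may be ok" for a host it just stopped. A small sketch that surfaces the code directly:

	out/minikube-darwin-arm64 status -p kubernetes-upgrade-332000; echo "exit=$?"
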
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-332000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-332000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.182272292s)

-- stdout --
	* [kubernetes-upgrade-332000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-332000" primary control-plane node in "kubernetes-upgrade-332000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-332000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-332000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:25:20.757345    3891 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:25:20.757499    3891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:25:20.757503    3891 out.go:358] Setting ErrFile to fd 2...
	I0827 15:25:20.757505    3891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:25:20.757648    3891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:25:20.758891    3891 out.go:352] Setting JSON to false
	I0827 15:25:20.777746    3891 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3285,"bootTime":1724794235,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:25:20.777842    3891 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:25:20.782510    3891 out.go:177] * [kubernetes-upgrade-332000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:25:20.789446    3891 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:25:20.789505    3891 notify.go:220] Checking for updates...
	I0827 15:25:20.795356    3891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:25:20.798363    3891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:25:20.802409    3891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:25:20.805345    3891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:25:20.808417    3891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:25:20.811676    3891 config.go:182] Loaded profile config "kubernetes-upgrade-332000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0827 15:25:20.811929    3891 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:25:20.814233    3891 out.go:177] * Using the qemu2 driver based on existing profile
	I0827 15:25:20.821435    3891 start.go:297] selected driver: qemu2
	I0827 15:25:20.821446    3891 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:25:20.821505    3891 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:25:20.824220    3891 cni.go:84] Creating CNI manager for ""
	I0827 15:25:20.824242    3891 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:25:20.824281    3891 start.go:340] cluster config:
	{Name:kubernetes-upgrade-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:25:20.828179    3891 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:25:20.832369    3891 out.go:177] * Starting "kubernetes-upgrade-332000" primary control-plane node in "kubernetes-upgrade-332000" cluster
	I0827 15:25:20.840390    3891 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:25:20.840424    3891 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:25:20.840434    3891 cache.go:56] Caching tarball of preloaded images
	I0827 15:25:20.840520    3891 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:25:20.840527    3891 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:25:20.840583    3891 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/kubernetes-upgrade-332000/config.json ...
	I0827 15:25:20.840949    3891 start.go:360] acquireMachinesLock for kubernetes-upgrade-332000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:25:20.840982    3891 start.go:364] duration metric: took 24.667µs to acquireMachinesLock for "kubernetes-upgrade-332000"
	I0827 15:25:20.840991    3891 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:25:20.840997    3891 fix.go:54] fixHost starting: 
	I0827 15:25:20.841121    3891 fix.go:112] recreateIfNeeded on kubernetes-upgrade-332000: state=Stopped err=<nil>
	W0827 15:25:20.841130    3891 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:25:20.845438    3891 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-332000" ...
	I0827 15:25:20.853335    3891 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:25:20.853382    3891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:99:44:28:a5:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/disk.qcow2
	I0827 15:25:20.855611    3891 main.go:141] libmachine: STDOUT: 
	I0827 15:25:20.855632    3891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:25:20.855662    3891 fix.go:56] duration metric: took 14.665334ms for fixHost
	I0827 15:25:20.855668    3891 start.go:83] releasing machines lock for "kubernetes-upgrade-332000", held for 14.682417ms
	W0827 15:25:20.855676    3891 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:25:20.855725    3891 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:25:20.855729    3891 start.go:729] Will try again in 5 seconds ...
	I0827 15:25:25.856383    3891 start.go:360] acquireMachinesLock for kubernetes-upgrade-332000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:25:25.856671    3891 start.go:364] duration metric: took 245.167µs to acquireMachinesLock for "kubernetes-upgrade-332000"
	I0827 15:25:25.856748    3891 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:25:25.856759    3891 fix.go:54] fixHost starting: 
	I0827 15:25:25.857169    3891 fix.go:112] recreateIfNeeded on kubernetes-upgrade-332000: state=Stopped err=<nil>
	W0827 15:25:25.857184    3891 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:25:25.866424    3891 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-332000" ...
	I0827 15:25:25.870423    3891 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:25:25.870500    3891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:99:44:28:a5:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubernetes-upgrade-332000/disk.qcow2
	I0827 15:25:25.876463    3891 main.go:141] libmachine: STDOUT: 
	I0827 15:25:25.876523    3891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:25:25.876576    3891 fix.go:56] duration metric: took 19.817583ms for fixHost
	I0827 15:25:25.876589    3891 start.go:83] releasing machines lock for "kubernetes-upgrade-332000", held for 19.904583ms
	W0827 15:25:25.876689    3891 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-332000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-332000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:25:25.884371    3891 out.go:201] 
	W0827 15:25:25.887529    3891 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:25:25.887542    3891 out.go:270] * 
	* 
	W0827 15:25:25.888696    3891 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:25:25.898424    3891 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-332000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-332000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-332000 version --output=json: exit status 1 (53.207333ms)

** stderr ** 
	error: context "kubernetes-upgrade-332000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
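
The kubectl failure is a direct consequence of the failed start above: the upgraded cluster never came up, so minikube never wrote a "kubernetes-upgrade-332000" context into the kubeconfig. A minimal check of which contexts actually exist, assuming the kubeconfig path shown earlier in this report:

	# list the context names kubectl can see in the test run's kubeconfig
	KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig kubectl config get-contexts -o name
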
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-27 15:25:25.963207 -0700 PDT m=+2934.650502293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-332000 -n kubernetes-upgrade-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-332000 -n kubernetes-upgrade-332000: exit status 7 (33.180166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-332000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-332000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-332000
--- FAIL: TestKubernetesUpgrade (17.25s)
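
The upgrade itself never ran: the qemu2 driver could not reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM restart failed before any Kubernetes work started. A minimal sketch for checking the daemon on the build host, assuming the Homebrew-managed socket_vmnet service implied by the /opt/socket_vmnet client path in the log above:

	# confirm the socket exists and a daemon is listening on it
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# restart the daemon (service name assumes a Homebrew install; adjust for a manual install)
	sudo brew services restart socket_vmnet
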

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.48s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19522
- KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2755855241/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.48s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.01s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19522
- KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current909132449/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.01s)
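
Both subtests fail identically: hyperkit is an Intel-only hypervisor, so minikube exits with DRV_UNSUPPORTED_OS (exit status 56) on this darwin/arm64 host before any upgrade logic runs. A minimal guard of the kind a harness could use to skip these cases on Apple Silicon, assuming a POSIX shell; the skip message is illustrative:

	# skip hyperkit-based tests on Apple Silicon hosts
	if [ "$(uname -s)/$(uname -m)" = "Darwin/arm64" ]; then
		echo "SKIP: the hyperkit driver is not supported on darwin/arm64"
		exit 0
	fi
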

TestStoppedBinaryUpgrade/Upgrade (573.94s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1303320678 start -p stopped-upgrade-443000 --memory=2200 --vm-driver=qemu2 
E0827 15:25:44.728977    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1303320678 start -p stopped-upgrade-443000 --memory=2200 --vm-driver=qemu2 : (40.157567917s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1303320678 -p stopped-upgrade-443000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1303320678 -p stopped-upgrade-443000 stop: (12.094067042s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-443000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0827 15:29:24.509260    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 15:30:44.719394    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-443000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.600211s)

-- stdout --
	* [stopped-upgrade-443000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-443000" primary control-plane node in "stopped-upgrade-443000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-443000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0827 15:26:19.418906    3939 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:26:19.419029    3939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:26:19.419032    3939 out.go:358] Setting ErrFile to fd 2...
	I0827 15:26:19.419034    3939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:26:19.419170    3939 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:26:19.420383    3939 out.go:352] Setting JSON to false
	I0827 15:26:19.437850    3939 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3344,"bootTime":1724794235,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:26:19.437923    3939 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:26:19.442529    3939 out.go:177] * [stopped-upgrade-443000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:26:19.450523    3939 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:26:19.450568    3939 notify.go:220] Checking for updates...
	I0827 15:26:19.460497    3939 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:26:19.464580    3939 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:26:19.468516    3939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:26:19.471606    3939 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:26:19.474550    3939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:26:19.477800    3939 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:26:19.480520    3939 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0827 15:26:19.483578    3939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:26:19.486525    3939 out.go:177] * Using the qemu2 driver based on existing profile
	I0827 15:26:19.493557    3939 start.go:297] selected driver: qemu2
	I0827 15:26:19.493563    3939 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-443000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0827 15:26:19.493616    3939 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:26:19.496256    3939 cni.go:84] Creating CNI manager for ""
	I0827 15:26:19.496279    3939 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:26:19.496301    3939 start.go:340] cluster config:
	{Name:stopped-upgrade-443000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0827 15:26:19.496353    3939 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:26:19.504560    3939 out.go:177] * Starting "stopped-upgrade-443000" primary control-plane node in "stopped-upgrade-443000" cluster
	I0827 15:26:19.508395    3939 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0827 15:26:19.508410    3939 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0827 15:26:19.508415    3939 cache.go:56] Caching tarball of preloaded images
	I0827 15:26:19.508471    3939 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:26:19.508477    3939 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0827 15:26:19.508524    3939 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/config.json ...
	I0827 15:26:19.508990    3939 start.go:360] acquireMachinesLock for stopped-upgrade-443000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:26:19.509026    3939 start.go:364] duration metric: took 29.291µs to acquireMachinesLock for "stopped-upgrade-443000"
	I0827 15:26:19.509036    3939 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:26:19.509043    3939 fix.go:54] fixHost starting: 
	I0827 15:26:19.509147    3939 fix.go:112] recreateIfNeeded on stopped-upgrade-443000: state=Stopped err=<nil>
	W0827 15:26:19.509156    3939 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:26:19.517336    3939 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-443000" ...
	I0827 15:26:19.521500    3939 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:26:19.521565    3939 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50458-:22,hostfwd=tcp::50459-:2376,hostname=stopped-upgrade-443000 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/disk.qcow2
	I0827 15:26:19.568051    3939 main.go:141] libmachine: STDOUT: 
	I0827 15:26:19.568083    3939 main.go:141] libmachine: STDERR: 
	I0827 15:26:19.568089    3939 main.go:141] libmachine: Waiting for VM to start (ssh -p 50458 docker@127.0.0.1)...
	I0827 15:26:39.631415    3939 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/config.json ...
	I0827 15:26:39.631908    3939 machine.go:93] provisionDockerMachine start ...
	I0827 15:26:39.631992    3939 main.go:141] libmachine: Using SSH client type: native
	I0827 15:26:39.632283    3939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 50458 <nil> <nil>}
	I0827 15:26:39.632294    3939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0827 15:26:39.710834    3939 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0827 15:26:39.710863    3939 buildroot.go:166] provisioning hostname "stopped-upgrade-443000"
	I0827 15:26:39.710962    3939 main.go:141] libmachine: Using SSH client type: native
	I0827 15:26:39.711135    3939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 50458 <nil> <nil>}
	I0827 15:26:39.711144    3939 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-443000 && echo "stopped-upgrade-443000" | sudo tee /etc/hostname
	I0827 15:26:39.781661    3939 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-443000
	
	I0827 15:26:39.781718    3939 main.go:141] libmachine: Using SSH client type: native
	I0827 15:26:39.781851    3939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 50458 <nil> <nil>}
	I0827 15:26:39.781861    3939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-443000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-443000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-443000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 15:26:39.848386    3939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 15:26:39.848400    3939 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19522-983/.minikube CaCertPath:/Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19522-983/.minikube}
	I0827 15:26:39.848413    3939 buildroot.go:174] setting up certificates
	I0827 15:26:39.848419    3939 provision.go:84] configureAuth start
	I0827 15:26:39.848426    3939 provision.go:143] copyHostCerts
	I0827 15:26:39.848508    3939 exec_runner.go:144] found /Users/jenkins/minikube-integration/19522-983/.minikube/ca.pem, removing ...
	I0827 15:26:39.848515    3939 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19522-983/.minikube/ca.pem
	I0827 15:26:39.849149    3939 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19522-983/.minikube/ca.pem (1078 bytes)
	I0827 15:26:39.849347    3939 exec_runner.go:144] found /Users/jenkins/minikube-integration/19522-983/.minikube/cert.pem, removing ...
	I0827 15:26:39.849351    3939 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19522-983/.minikube/cert.pem
	I0827 15:26:39.849408    3939 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19522-983/.minikube/cert.pem (1123 bytes)
	I0827 15:26:39.849520    3939 exec_runner.go:144] found /Users/jenkins/minikube-integration/19522-983/.minikube/key.pem, removing ...
	I0827 15:26:39.849523    3939 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19522-983/.minikube/key.pem
	I0827 15:26:39.849575    3939 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19522-983/.minikube/key.pem (1675 bytes)
	I0827 15:26:39.849663    3939 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19522-983/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-443000 san=[127.0.0.1 localhost minikube stopped-upgrade-443000]
	I0827 15:26:39.966813    3939 provision.go:177] copyRemoteCerts
	I0827 15:26:39.966864    3939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 15:26:39.966874    3939 sshutil.go:53] new ssh client: &{IP:localhost Port:50458 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0827 15:26:40.003890    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0827 15:26:40.011503    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0827 15:26:40.019027    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0827 15:26:40.025850    3939 provision.go:87] duration metric: took 177.430875ms to configureAuth
	I0827 15:26:40.025862    3939 buildroot.go:189] setting minikube options for container-runtime
	I0827 15:26:40.025998    3939 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:26:40.026033    3939 main.go:141] libmachine: Using SSH client type: native
	I0827 15:26:40.026121    3939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 50458 <nil> <nil>}
	I0827 15:26:40.026128    3939 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0827 15:26:40.089686    3939 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0827 15:26:40.089696    3939 buildroot.go:70] root file system type: tmpfs
	I0827 15:26:40.089762    3939 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0827 15:26:40.089817    3939 main.go:141] libmachine: Using SSH client type: native
	I0827 15:26:40.089948    3939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 50458 <nil> <nil>}
	I0827 15:26:40.089981    3939 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0827 15:26:40.156708    3939 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0827 15:26:40.156767    3939 main.go:141] libmachine: Using SSH client type: native
	I0827 15:26:40.156895    3939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 50458 <nil> <nil>}
	I0827 15:26:40.156905    3939 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0827 15:26:40.507042    3939 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0827 15:26:40.507054    3939 machine.go:96] duration metric: took 875.165792ms to provisionDockerMachine
	I0827 15:26:40.507065    3939 start.go:293] postStartSetup for "stopped-upgrade-443000" (driver="qemu2")
	I0827 15:26:40.507072    3939 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 15:26:40.507122    3939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 15:26:40.507131    3939 sshutil.go:53] new ssh client: &{IP:localhost Port:50458 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0827 15:26:40.539863    3939 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 15:26:40.541177    3939 info.go:137] Remote host: Buildroot 2021.02.12
	I0827 15:26:40.541185    3939 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19522-983/.minikube/addons for local assets ...
	I0827 15:26:40.541279    3939 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19522-983/.minikube/files for local assets ...
	I0827 15:26:40.541400    3939 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19522-983/.minikube/files/etc/ssl/certs/14632.pem -> 14632.pem in /etc/ssl/certs
	I0827 15:26:40.541530    3939 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 15:26:40.544323    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/files/etc/ssl/certs/14632.pem --> /etc/ssl/certs/14632.pem (1708 bytes)
	I0827 15:26:40.551521    3939 start.go:296] duration metric: took 44.452042ms for postStartSetup
	I0827 15:26:40.551534    3939 fix.go:56] duration metric: took 21.04318675s for fixHost
	I0827 15:26:40.551567    3939 main.go:141] libmachine: Using SSH client type: native
	I0827 15:26:40.551676    3939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 50458 <nil> <nil>}
	I0827 15:26:40.551684    3939 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 15:26:40.616894    3939 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724797600.242501879
	
	I0827 15:26:40.616902    3939 fix.go:216] guest clock: 1724797600.242501879
	I0827 15:26:40.616906    3939 fix.go:229] Guest: 2024-08-27 15:26:40.242501879 -0700 PDT Remote: 2024-08-27 15:26:40.551536 -0700 PDT m=+21.152239501 (delta=-309.034121ms)
	I0827 15:26:40.616917    3939 fix.go:200] guest clock delta is within tolerance: -309.034121ms
	I0827 15:26:40.616919    3939 start.go:83] releasing machines lock for "stopped-upgrade-443000", held for 21.108583209s
	I0827 15:26:40.616984    3939 ssh_runner.go:195] Run: cat /version.json
	I0827 15:26:40.616994    3939 sshutil.go:53] new ssh client: &{IP:localhost Port:50458 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0827 15:26:40.616985    3939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 15:26:40.617030    3939 sshutil.go:53] new ssh client: &{IP:localhost Port:50458 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	W0827 15:26:40.617613    3939 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50458: connect: connection refused
	I0827 15:26:40.617634    3939 retry.go:31] will retry after 181.821517ms: dial tcp [::1]:50458: connect: connection refused
	W0827 15:26:40.649123    3939 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0827 15:26:40.649171    3939 ssh_runner.go:195] Run: systemctl --version
	I0827 15:26:40.650951    3939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 15:26:40.652482    3939 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 15:26:40.652505    3939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0827 15:26:40.655480    3939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0827 15:26:40.659961    3939 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0827 15:26:40.659970    3939 start.go:495] detecting cgroup driver to use...
	I0827 15:26:40.660048    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 15:26:40.667095    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0827 15:26:40.670587    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0827 15:26:40.673520    3939 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0827 15:26:40.673544    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0827 15:26:40.676453    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0827 15:26:40.680101    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0827 15:26:40.683662    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0827 15:26:40.687130    3939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 15:26:40.690067    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0827 15:26:40.692883    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0827 15:26:40.696157    3939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0827 15:26:40.699527    3939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 15:26:40.702327    3939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 15:26:40.704856    3939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:26:40.787582    3939 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0827 15:26:40.798573    3939 start.go:495] detecting cgroup driver to use...
	I0827 15:26:40.798650    3939 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0827 15:26:40.807557    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 15:26:40.812123    3939 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 15:26:40.823262    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 15:26:40.827709    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0827 15:26:40.832784    3939 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0827 15:26:40.882518    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0827 15:26:40.888170    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 15:26:40.893857    3939 ssh_runner.go:195] Run: which cri-dockerd
	I0827 15:26:40.894974    3939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0827 15:26:40.897347    3939 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0827 15:26:40.902078    3939 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0827 15:26:40.979468    3939 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0827 15:26:41.059983    3939 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0827 15:26:41.060052    3939 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0827 15:26:41.065700    3939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:26:41.145939    3939 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0827 15:26:42.306544    3939 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.160626667s)
	I0827 15:26:42.306619    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0827 15:26:42.311919    3939 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0827 15:26:42.318525    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0827 15:26:42.323160    3939 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0827 15:26:42.397751    3939 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0827 15:26:42.479555    3939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:26:42.553896    3939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0827 15:26:42.559760    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0827 15:26:42.564684    3939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:26:42.643463    3939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0827 15:26:42.681846    3939 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0827 15:26:42.681924    3939 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0827 15:26:42.684870    3939 start.go:563] Will wait 60s for crictl version
	I0827 15:26:42.684929    3939 ssh_runner.go:195] Run: which crictl
	I0827 15:26:42.686811    3939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 15:26:42.704959    3939 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0827 15:26:42.705028    3939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0827 15:26:42.721202    3939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0827 15:26:42.741691    3939 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0827 15:26:42.741820    3939 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0827 15:26:42.743125    3939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 15:26:42.747061    3939 kubeadm.go:883] updating cluster {Name:stopped-upgrade-443000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0827 15:26:42.747104    3939 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0827 15:26:42.747142    3939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0827 15:26:42.757588    3939 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0827 15:26:42.757612    3939 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0827 15:26:42.757656    3939 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0827 15:26:42.760588    3939 ssh_runner.go:195] Run: which lz4
	I0827 15:26:42.761910    3939 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0827 15:26:42.763122    3939 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0827 15:26:42.763132    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0827 15:26:43.746203    3939 docker.go:649] duration metric: took 984.3615ms to copy over tarball
	I0827 15:26:43.746264    3939 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0827 15:26:44.912968    3939 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.166727875s)
	I0827 15:26:44.912981    3939 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0827 15:26:44.928468    3939 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0827 15:26:44.931979    3939 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0827 15:26:44.937277    3939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:26:45.015173    3939 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0827 15:26:46.604960    3939 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.589822541s)
	I0827 15:26:46.605058    3939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0827 15:26:46.618982    3939 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0827 15:26:46.618991    3939 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0827 15:26:46.618996    3939 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0827 15:26:46.624516    3939 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:26:46.627126    3939 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:26:46.628705    3939 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:26:46.629018    3939 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:26:46.630833    3939 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:26:46.630858    3939 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:26:46.632165    3939 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:26:46.632201    3939 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:26:46.634319    3939 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:26:46.634354    3939 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:26:46.634373    3939 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0827 15:26:46.635540    3939 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0827 15:26:46.636314    3939 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:26:46.636348    3939 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0827 15:26:46.638539    3939 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:26:46.638569    3939 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	W0827 15:26:47.367345    3939 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0827 15:26:47.367749    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:26:47.396367    3939 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0827 15:26:47.396425    3939 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:26:47.396524    3939 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:26:47.417997    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0827 15:26:47.418143    3939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0827 15:26:47.419994    3939 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0827 15:26:47.420007    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0827 15:26:47.451399    3939 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0827 15:26:47.451413    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0827 15:26:47.561447    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:26:47.602846    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:26:47.610770    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:26:47.622958    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:26:47.706165    3939 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0827 15:26:47.706225    3939 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0827 15:26:47.706245    3939 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:26:47.706247    3939 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0827 15:26:47.706259    3939 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:26:47.706297    3939 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0827 15:26:47.706297    3939 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0827 15:26:47.706344    3939 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0827 15:26:47.706386    3939 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:26:47.706404    3939 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0827 15:26:47.706358    3939 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0827 15:26:47.706457    3939 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:26:47.706475    3939 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0827 15:26:47.723932    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0827 15:26:47.728469    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0827 15:26:47.736244    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0827 15:26:47.736259    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0827 15:26:47.776591    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0827 15:26:47.780457    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0827 15:26:47.787921    3939 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0827 15:26:47.788046    3939 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0827 15:26:47.788065    3939 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0827 15:26:47.788087    3939 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0827 15:26:47.788049    3939 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:26:47.794991    3939 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0827 15:26:47.795014    3939 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0827 15:26:47.795069    3939 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0827 15:26:47.800992    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0827 15:26:47.802664    3939 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0827 15:26:47.802688    3939 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:26:47.802744    3939 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0827 15:26:47.816033    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0827 15:26:47.816072    3939 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0827 15:26:47.816163    3939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0827 15:26:47.816169    3939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0827 15:26:47.817877    3939 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0827 15:26:47.817895    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0827 15:26:47.818142    3939 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0827 15:26:47.818154    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0827 15:26:47.840181    3939 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0827 15:26:47.840196    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0827 15:26:47.883889    3939 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0827 15:26:47.883917    3939 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0827 15:26:47.883924    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0827 15:26:47.920412    3939 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0827 15:26:47.920457    3939 cache_images.go:92] duration metric: took 1.301497875s to LoadCachedImages
	W0827 15:26:47.920510    3939 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
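Note: the cache-load sequence above (a `stat` existence check, a transfer on miss, then piping the tarball into `docker load`) is the core of the LoadCachedImages path. A minimal Go sketch of that check-then-transfer pattern, assuming a `runCmd` helper in place of minikube's ssh_runner and a plain `cp` in place of the scp step:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runCmd is a stand-in for minikube's ssh_runner: it executes a shell
// command and returns an error when it exits non-zero.
func runCmd(script string) error {
	return exec.Command("/bin/bash", "-c", script).Run()
}

// ensureImageLoaded mirrors the pattern in the log: an existence check via
// stat, a copy when the file is absent, then `docker load`.
func ensureImageLoaded(cachePath, guestPath string) error {
	// Existence check: `stat` exits 1 when the tarball is not present.
	if err := runCmd(fmt.Sprintf("stat -c '%%s %%y' %s", guestPath)); err != nil {
		// Hypothetical copy step; the real code streams the file over SSH.
		if err := runCmd(fmt.Sprintf("cp %s %s", cachePath, guestPath)); err != nil {
			return fmt.Errorf("transfer: %w", err)
		}
	}
	// Load the tarball into the container runtime.
	return runCmd(fmt.Sprintf("sudo cat %s | docker load", guestPath))
}

func main() {
	if err := ensureImageLoaded("/tmp/cache/pause_3.7", "/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Println("load failed:", err)
	}
}
```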
	I0827 15:26:47.920518    3939 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0827 15:26:47.920586    3939 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-443000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0827 15:26:47.920694    3939 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0827 15:26:47.935513    3939 cni.go:84] Creating CNI manager for ""
	I0827 15:26:47.935526    3939 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:26:47.935531    3939 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 15:26:47.935539    3939 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-443000 NodeName:stopped-upgrade-443000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 15:26:47.935612    3939 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-443000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0827 15:26:47.935681    3939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0827 15:26:47.939387    3939 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 15:26:47.939438    3939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0827 15:26:47.942367    3939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0827 15:26:47.947557    3939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 15:26:47.952852    3939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0827 15:26:47.958596    3939 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0827 15:26:47.960051    3939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 15:26:47.964618    3939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:26:48.041647    3939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 15:26:48.047368    3939 certs.go:68] Setting up /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000 for IP: 10.0.2.15
	I0827 15:26:48.047381    3939 certs.go:194] generating shared ca certs ...
	I0827 15:26:48.047390    3939 certs.go:226] acquiring lock for ca certs: {Name:mkc3f4287026c100ff774c65b8333a833cfe8f85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:26:48.047568    3939 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19522-983/.minikube/ca.key
	I0827 15:26:48.047625    3939 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19522-983/.minikube/proxy-client-ca.key
	I0827 15:26:48.047633    3939 certs.go:256] generating profile certs ...
	I0827 15:26:48.047717    3939 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/client.key
	I0827 15:26:48.047738    3939 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4
	I0827 15:26:48.047751    3939 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0827 15:26:48.155771    3939 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4 ...
	I0827 15:26:48.155786    3939 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4: {Name:mk9e9e95b75e538296521b4b4d1d83521f1d6e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:26:48.156105    3939 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4 ...
	I0827 15:26:48.156110    3939 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4: {Name:mk0380fe0088fdd2112c3f42dffcefaab127de8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:26:48.156246    3939 certs.go:381] copying /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.crt.e1b87fa4 -> /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.crt
	I0827 15:26:48.156923    3939 certs.go:385] copying /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.key.e1b87fa4 -> /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.key
	I0827 15:26:48.157095    3939 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/proxy-client.key
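Note: the apiserver profile cert generated above carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]. A self-contained Go sketch that mints a certificate with those SANs; it self-signs for brevity, whereas minikube signs with its minikubeCA:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative key + self-signed cert with the IP SANs from the log;
	// errors are ignored only to keep the sketch short.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```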
	I0827 15:26:48.157236    3939 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/1463.pem (1338 bytes)
	W0827 15:26:48.157266    3939 certs.go:480] ignoring /Users/jenkins/minikube-integration/19522-983/.minikube/certs/1463_empty.pem, impossibly tiny 0 bytes
	I0827 15:26:48.157272    3939 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca-key.pem (1679 bytes)
	I0827 15:26:48.157295    3939 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem (1078 bytes)
	I0827 15:26:48.157314    3939 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem (1123 bytes)
	I0827 15:26:48.157332    3939 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/certs/key.pem (1675 bytes)
	I0827 15:26:48.157374    3939 certs.go:484] found cert: /Users/jenkins/minikube-integration/19522-983/.minikube/files/etc/ssl/certs/14632.pem (1708 bytes)
	I0827 15:26:48.157714    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 15:26:48.164561    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0827 15:26:48.171981    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 15:26:48.179345    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0827 15:26:48.186648    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0827 15:26:48.193505    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 15:26:48.200227    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 15:26:48.207493    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0827 15:26:48.214882    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/certs/1463.pem --> /usr/share/ca-certificates/1463.pem (1338 bytes)
	I0827 15:26:48.221671    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/files/etc/ssl/certs/14632.pem --> /usr/share/ca-certificates/14632.pem (1708 bytes)
	I0827 15:26:48.228230    3939 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 15:26:48.235294    3939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 15:26:48.240291    3939 ssh_runner.go:195] Run: openssl version
	I0827 15:26:48.242152    3939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 15:26:48.244887    3939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 15:26:48.246414    3939 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0827 15:26:48.246436    3939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 15:26:48.248081    3939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 15:26:48.251245    3939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463.pem && ln -fs /usr/share/ca-certificates/1463.pem /etc/ssl/certs/1463.pem"
	I0827 15:26:48.254217    3939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463.pem
	I0827 15:26:48.255509    3939 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 21:43 /usr/share/ca-certificates/1463.pem
	I0827 15:26:48.255526    3939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463.pem
	I0827 15:26:48.257247    3939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1463.pem /etc/ssl/certs/51391683.0"
	I0827 15:26:48.260319    3939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14632.pem && ln -fs /usr/share/ca-certificates/14632.pem /etc/ssl/certs/14632.pem"
	I0827 15:26:48.264064    3939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14632.pem
	I0827 15:26:48.265531    3939 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 21:43 /usr/share/ca-certificates/14632.pem
	I0827 15:26:48.265549    3939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14632.pem
	I0827 15:26:48.267192    3939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14632.pem /etc/ssl/certs/3ec20f2e.0"
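Note: the `openssl x509 -hash` / `ln -fs` pairs above build the OpenSSL hashed-directory layout under /etc/ssl/certs, where each CA cert is reachable via a `<subject-hash>.0` symlink. An illustrative Go sketch of that pattern (paths are placeholders):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash reproduces the pattern above: ask openssl for the cert's
// subject hash, then point <certsDir>/<hash>.0 at the PEM so that
// OpenSSL-based clients can find it during verification.
func linkByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("%s/%s.0", certsDir, strings.TrimSpace(string(out)))
	os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link failed:", err)
	}
}
```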
	I0827 15:26:48.270365    3939 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 15:26:48.271736    3939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0827 15:26:48.273697    3939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0827 15:26:48.275352    3939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0827 15:26:48.277328    3939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0827 15:26:48.279045    3939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0827 15:26:48.280842    3939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0827 15:26:48.282615    3939 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-443000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50493 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-443000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0827 15:26:48.282679    3939 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0827 15:26:48.292660    3939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0827 15:26:48.295839    3939 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0827 15:26:48.295845    3939 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0827 15:26:48.295864    3939 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0827 15:26:48.299031    3939 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0827 15:26:48.299331    3939 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-443000" does not appear in /Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:26:48.299426    3939 kubeconfig.go:62] /Users/jenkins/minikube-integration/19522-983/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-443000" cluster setting kubeconfig missing "stopped-upgrade-443000" context setting]
	I0827 15:26:48.299662    3939 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/kubeconfig: {Name:mk76bdfc088f48bbbf806c94a3244a333f8aeabd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:26:48.300205    3939 kapi.go:59] client config for stopped-upgrade-443000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/client.key", CAFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103fdbeb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0827 15:26:48.300536    3939 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0827 15:26:48.303277    3939 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-443000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
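Note: the drift check above relies on the `diff -u` exit-code convention: status 0 means the rendered kubeadm.yaml.new matches the on-disk kubeadm.yaml, status 1 means drift and triggers a reconfigure, anything else is a real error. A Go sketch of that convention:

```go
package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u` the way the log does and interprets the
// exit status: 0 = identical, 1 = drift, other = failure.
func configDrifted(current, proposed string) (bool, error) {
	out, err := exec.Command("diff", "-u", current, proposed).CombinedOutput()
	if err == nil {
		return false, nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Printf("drift detected:\n%s", out)
		return true, nil
	}
	return false, err
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
}
```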
	I0827 15:26:48.303283    3939 kubeadm.go:1160] stopping kube-system containers ...
	I0827 15:26:48.303319    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0827 15:26:48.313900    3939 docker.go:483] Stopping containers: [165d46598547 f4d3cadbd368 9cd919fac506 2185b7616386 a9f742447589 585e47bfe28a cb4c8257b0f2 69c30e03f3a6]
	I0827 15:26:48.313969    3939 ssh_runner.go:195] Run: docker stop 165d46598547 f4d3cadbd368 9cd919fac506 2185b7616386 a9f742447589 585e47bfe28a cb4c8257b0f2 69c30e03f3a6
	I0827 15:26:48.324734    3939 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0827 15:26:48.330541    3939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 15:26:48.333336    3939 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 15:26:48.333343    3939 kubeadm.go:157] found existing configuration files:
	
	I0827 15:26:48.333369    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/admin.conf
	I0827 15:26:48.336045    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 15:26:48.336072    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 15:26:48.339153    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/kubelet.conf
	I0827 15:26:48.341943    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 15:26:48.341968    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 15:26:48.344472    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/controller-manager.conf
	I0827 15:26:48.347452    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 15:26:48.347472    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 15:26:48.350278    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/scheduler.conf
	I0827 15:26:48.352621    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 15:26:48.352640    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0827 15:26:48.355802    3939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 15:26:48.358921    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 15:26:48.382025    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 15:26:48.847475    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0827 15:26:48.978724    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 15:26:48.998860    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0827 15:26:49.027264    3939 api_server.go:52] waiting for apiserver process to appear ...
	I0827 15:26:49.027352    3939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 15:26:49.529502    3939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 15:26:50.029390    3939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 15:26:50.033570    3939 api_server.go:72] duration metric: took 1.006341375s to wait for apiserver process to appear ...
	I0827 15:26:50.033581    3939 api_server.go:88] waiting for apiserver healthz status ...
	I0827 15:26:50.033591    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:26:55.035613    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:26:55.035640    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:00.035764    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:00.035818    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:05.036072    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:05.036137    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:10.036550    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:10.036607    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:15.037237    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:15.037300    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:20.038306    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:20.038347    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:25.039404    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:25.039445    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:30.040801    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:30.040828    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:35.042186    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:35.042211    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:40.044205    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:40.044234    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:45.046255    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:45.046273    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:50.048250    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
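Note: each healthz probe above times out after roughly 5s and is retried until the apiserver answers (here it never does, so the loop falls through to log gathering). A minimal Go sketch of that polling loop; InsecureSkipVerify stands in for the client-certificate TLS config minikube actually uses:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Per-request timeout matches the ~5s cadence visible in the log.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 1; attempt <= 3; attempt++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err)
			continue // retry, as the log does after each timeout
		}
		resp.Body.Close()
		fmt.Println("healthz status:", resp.Status)
		return
	}
}
```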
	I0827 15:27:50.048398    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:27:50.059300    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:27:50.059374    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:27:50.070162    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:27:50.070232    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:27:50.080812    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:27:50.080882    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:27:50.090998    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:27:50.091074    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:27:50.101614    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:27:50.101686    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:27:50.111915    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:27:50.111987    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:27:50.122576    3939 logs.go:276] 0 containers: []
	W0827 15:27:50.122587    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:27:50.122641    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:27:50.133075    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:27:50.133098    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:27:50.133104    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:27:50.150350    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:27:50.150359    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:27:50.188512    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:27:50.188535    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:27:50.270443    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:27:50.270458    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:27:50.282113    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:27:50.282124    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:27:50.295536    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:27:50.295548    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:27:50.307366    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:27:50.307380    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:27:50.319099    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:27:50.319111    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:27:50.323112    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:27:50.323121    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:27:50.363425    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:27:50.363436    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:27:50.377545    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:27:50.377555    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:27:50.394530    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:27:50.394545    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:27:50.419945    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:27:50.419956    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:27:50.431596    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:27:50.431610    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:27:50.452097    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:27:50.452110    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:27:50.463409    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:27:50.463424    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
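Note: each gathering pass above first resolves container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tails the last 400 log lines per container. An illustrative Go sketch of that lookup-then-tail loop (the component name is just an example):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the lookup in the log: list the IDs of containers,
// including exited ones, whose name matches k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, id := range ids {
		// Tail each container's last 400 lines, as the gatherer does.
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("== %s ==\n%s", id, logs)
	}
}
```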
	I0827 15:27:52.975527    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:27:57.976675    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:27:57.976910    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:27:57.996055    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:27:57.996137    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:27:58.010527    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:27:58.010611    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:27:58.022381    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:27:58.022460    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:27:58.033015    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:27:58.033084    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:27:58.043370    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:27:58.043437    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:27:58.053583    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:27:58.053649    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:27:58.064032    3939 logs.go:276] 0 containers: []
	W0827 15:27:58.064044    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:27:58.064102    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:27:58.074836    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:27:58.074854    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:27:58.074861    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:27:58.086308    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:27:58.086319    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:27:58.098549    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:27:58.098561    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:27:58.111071    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:27:58.111082    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:27:58.129005    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:27:58.129016    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:27:58.140780    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:27:58.140791    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:27:58.145619    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:27:58.145625    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:27:58.184301    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:27:58.184312    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:27:58.195220    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:27:58.195230    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:27:58.209246    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:27:58.209256    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:27:58.221840    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:27:58.221851    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:27:58.233684    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:27:58.233695    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:27:58.270707    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:27:58.270716    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:27:58.284846    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:27:58.284857    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:27:58.298548    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:27:58.298560    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:27:58.324381    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:27:58.324392    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:00.860788    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:05.862912    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:05.863123    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:05.883363    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:28:05.883454    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:05.896772    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:28:05.896847    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:05.911511    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:28:05.911579    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:05.921941    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:28:05.922021    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:05.932558    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:28:05.932622    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:05.942967    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:28:05.943027    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:05.952884    3939 logs.go:276] 0 containers: []
	W0827 15:28:05.952896    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:05.952957    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:05.963660    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:28:05.963676    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:28:05.963682    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:28:05.978351    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:28:05.978364    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:28:05.989706    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:28:05.989717    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:28:06.003745    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:28:06.003758    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:28:06.015379    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:28:06.015390    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:06.027310    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:06.027324    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:06.032315    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:06.032324    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:06.068110    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:28:06.068125    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:28:06.080253    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:28:06.080264    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:28:06.102573    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:28:06.102586    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:28:06.116663    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:28:06.116673    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:28:06.128661    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:28:06.128672    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:28:06.140353    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:06.140364    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:06.177502    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:28:06.177510    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:28:06.215775    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:28:06.215787    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:28:06.229735    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:06.229749    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:08.756934    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:13.759019    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:13.759204    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:13.775251    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:28:13.775338    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:13.789049    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:28:13.789121    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:13.800000    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:28:13.800071    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:13.810302    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:28:13.810367    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:13.820395    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:28:13.820470    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:13.830939    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:28:13.831005    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:13.841297    3939 logs.go:276] 0 containers: []
	W0827 15:28:13.841308    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:13.841366    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:13.852101    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:28:13.852120    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:13.852125    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:13.856341    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:28:13.856348    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:28:13.867988    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:13.867997    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:13.893015    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:13.893025    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:13.932076    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:28:13.932084    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:28:13.946067    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:28:13.946077    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:28:13.959818    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:28:13.959829    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:28:13.971048    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:28:13.971058    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:28:13.990989    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:28:13.990998    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:28:14.005386    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:28:14.005399    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:28:14.023284    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:28:14.023298    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:28:14.035956    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:14.035965    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:14.076429    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:28:14.076440    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:28:14.114740    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:28:14.114752    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:28:14.126050    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:28:14.126061    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:28:14.138047    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:28:14.138058    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:16.651990    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:21.652701    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:21.653117    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:21.699880    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:28:21.699982    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:21.716084    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:28:21.716186    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:21.729065    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:28:21.729137    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:21.739984    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:28:21.740059    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:21.751742    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:28:21.751815    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:21.762971    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:28:21.763042    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:21.773960    3939 logs.go:276] 0 containers: []
	W0827 15:28:21.773971    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:21.774029    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:21.788335    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:28:21.788352    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:28:21.788358    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:28:21.800673    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:28:21.800687    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:28:21.813648    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:28:21.813659    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:28:21.828415    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:28:21.828424    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:28:21.840643    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:28:21.840657    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:28:21.852898    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:28:21.852909    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:28:21.877467    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:21.877480    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:21.900891    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:28:21.900899    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:21.912822    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:28:21.912835    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:28:21.931623    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:21.931633    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:21.970185    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:21.970195    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:22.004565    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:28:22.004580    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:28:22.042800    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:28:22.042810    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:28:22.060411    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:22.060421    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:22.065065    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:28:22.065075    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:28:22.083046    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:28:22.083056    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:28:24.597103    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:29.599300    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:29.599654    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:29.628173    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:28:29.628310    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:29.646563    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:28:29.646653    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:29.660519    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:28:29.660590    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:29.672903    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:28:29.672976    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:29.686555    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:28:29.686621    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:29.697313    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:28:29.697385    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:29.707199    3939 logs.go:276] 0 containers: []
	W0827 15:28:29.707209    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:29.707274    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:29.717604    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:28:29.717621    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:29.717626    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:29.721963    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:28:29.721969    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:28:29.760265    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:28:29.760283    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:28:29.772990    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:28:29.773005    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:28:29.788504    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:29.788515    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:29.827097    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:28:29.827105    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:28:29.841341    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:28:29.841351    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:28:29.853351    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:29.853359    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:29.877920    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:28:29.877933    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:28:29.889289    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:28:29.889299    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:28:29.907650    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:28:29.907661    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:28:29.919395    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:29.919407    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:29.954451    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:28:29.954462    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:28:29.968298    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:28:29.968313    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:28:29.982855    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:28:29.982865    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:28:29.994963    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:28:29.994975    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:32.509289    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:37.511800    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:37.512237    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:37.544754    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:28:37.544893    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:37.565665    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:28:37.565765    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:37.580528    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:28:37.580599    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:37.592687    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:28:37.592760    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:37.603706    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:28:37.603775    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:37.614523    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:28:37.614588    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:37.629318    3939 logs.go:276] 0 containers: []
	W0827 15:28:37.629330    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:37.629389    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:37.640077    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:28:37.640097    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:28:37.640102    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:28:37.656514    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:28:37.656525    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:28:37.675684    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:28:37.675695    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:28:37.688671    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:28:37.688681    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:28:37.700576    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:28:37.700590    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:28:37.721055    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:28:37.721069    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:28:37.733858    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:28:37.733872    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:37.745476    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:37.745489    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:37.749533    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:28:37.749539    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:28:37.761126    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:37.761136    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:37.785618    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:37.785629    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:37.824849    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:37.824871    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:37.862643    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:28:37.862657    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:28:37.902105    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:28:37.902117    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:28:37.917078    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:28:37.917089    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:28:37.929415    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:28:37.929426    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:28:40.442957    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:45.445525    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:45.445805    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:45.477859    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:28:45.477981    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:45.499381    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:28:45.499484    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:45.513914    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:28:45.513987    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:45.528298    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:28:45.528372    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:45.538969    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:28:45.539045    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:45.553596    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:28:45.553663    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:45.565918    3939 logs.go:276] 0 containers: []
	W0827 15:28:45.565930    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:45.565991    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:45.576647    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:28:45.576666    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:28:45.576672    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:28:45.589985    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:28:45.589997    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:28:45.602298    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:28:45.602309    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:28:45.618075    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:28:45.618087    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:28:45.632481    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:28:45.632491    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:28:45.671314    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:28:45.671327    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:28:45.689700    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:45.689713    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:45.726182    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:28:45.726193    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:28:45.739973    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:45.739986    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:45.744566    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:45.744576    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:45.782504    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:28:45.782516    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:28:45.793966    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:45.793978    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:45.818680    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:28:45.818691    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:45.831219    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:28:45.831234    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:28:45.846542    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:28:45.846553    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:28:45.864242    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:28:45.864252    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:28:48.377818    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:28:53.380015    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:28:53.380193    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:28:53.398906    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:28:53.399010    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:28:53.413124    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:28:53.413200    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:28:53.427436    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:28:53.427503    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:28:53.438498    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:28:53.438575    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:28:53.448492    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:28:53.448557    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:28:53.458633    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:28:53.458700    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:28:53.468814    3939 logs.go:276] 0 containers: []
	W0827 15:28:53.468827    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:28:53.468888    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:28:53.479365    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:28:53.479382    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:28:53.479387    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:28:53.499810    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:28:53.499821    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:28:53.514333    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:28:53.514343    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:28:53.526861    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:28:53.526871    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:28:53.538875    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:28:53.538885    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:28:53.563699    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:28:53.563708    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:28:53.603358    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:28:53.603368    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:28:53.616363    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:28:53.616373    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:28:53.654299    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:28:53.654315    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:28:53.668690    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:28:53.668703    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:28:53.679892    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:28:53.679902    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:28:53.697340    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:28:53.697350    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:28:53.708971    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:28:53.708981    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:28:53.742995    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:28:53.743007    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:28:53.757508    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:28:53.757521    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:28:53.761634    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:28:53.761640    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:28:56.282074    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:01.284264    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:01.284411    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:01.298266    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:01.298342    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:01.309256    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:01.309327    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:01.319316    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:01.319376    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:01.329904    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:01.329963    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:01.339776    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:01.339833    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:01.350298    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:01.350364    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:01.360809    3939 logs.go:276] 0 containers: []
	W0827 15:29:01.360820    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:01.360871    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:01.371409    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:01.371427    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:01.371434    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:01.386596    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:01.386608    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:01.421928    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:01.421941    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:01.435018    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:01.435031    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:01.446619    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:01.446633    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:01.485365    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:01.485374    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:01.489268    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:01.489276    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:01.503660    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:01.503670    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:01.544318    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:01.544328    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:01.556659    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:01.556671    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:01.569534    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:01.569545    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:01.586421    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:01.586432    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:01.597911    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:01.597922    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:01.609744    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:01.609754    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:01.627232    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:01.627248    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:01.651274    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:01.651289    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:04.164913    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:09.167007    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:09.167131    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:09.185118    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:09.185219    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:09.202155    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:09.202228    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:09.213775    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:09.213845    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:09.227946    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:09.228023    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:09.238450    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:09.238511    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:09.248675    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:09.248775    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:09.263122    3939 logs.go:276] 0 containers: []
	W0827 15:29:09.263136    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:09.263190    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:09.273795    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:09.273811    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:09.273818    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:09.288682    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:09.288693    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:09.302265    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:09.302275    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:09.316387    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:09.316401    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:09.328312    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:09.328347    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:09.339869    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:09.339883    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:09.351842    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:09.351856    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:09.369358    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:09.369368    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:09.380428    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:09.380438    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:09.392170    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:09.392181    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:09.404988    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:09.405000    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:09.443426    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:09.443435    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:09.448079    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:09.448087    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:09.483874    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:09.483888    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:09.498208    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:09.498221    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:09.535966    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:09.535978    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:12.061200    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:17.061393    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:17.061566    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:17.089458    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:17.089545    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:17.103108    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:17.103178    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:17.114427    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:17.114492    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:17.125001    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:17.125078    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:17.137964    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:17.138035    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:17.148156    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:17.148221    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:17.158226    3939 logs.go:276] 0 containers: []
	W0827 15:29:17.158237    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:17.158295    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:17.169417    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:17.169433    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:17.169439    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:17.184293    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:17.184304    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:17.208721    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:17.208731    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:17.226121    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:17.226132    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:17.238456    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:17.238470    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:17.249758    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:17.249769    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:17.285933    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:17.285943    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:17.324068    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:17.324083    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:17.337887    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:17.337898    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:17.349806    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:17.349817    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:17.353908    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:17.353916    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:17.372173    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:17.372187    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:17.384144    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:17.384153    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:17.396030    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:17.396042    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:17.408045    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:17.408056    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:17.446885    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:17.446898    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:19.967227    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:24.968875    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:24.969098    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:24.998202    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:24.998326    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:25.016813    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:25.016898    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:25.030596    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:25.030669    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:25.046490    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:25.046555    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:25.056953    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:25.057022    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:25.067588    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:25.067654    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:25.083703    3939 logs.go:276] 0 containers: []
	W0827 15:29:25.083717    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:25.083778    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:25.095856    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:25.095874    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:25.095880    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:25.134575    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:25.134584    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:25.152236    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:25.152247    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:25.168361    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:25.168373    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:25.185776    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:25.185787    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:25.198281    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:25.198294    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:25.203066    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:25.203073    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:25.218772    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:25.218783    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:25.253722    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:25.253736    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:25.269388    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:25.269401    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:25.310515    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:25.310527    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:25.322345    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:25.322360    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:25.333591    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:25.333604    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:25.352101    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:25.352115    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:25.364015    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:25.364026    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:25.375741    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:25.375753    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:27.902173    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:32.904660    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:32.904860    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:32.923717    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:32.923811    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:32.937385    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:32.937459    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:32.948560    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:32.948626    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:32.958659    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:32.958731    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:32.969327    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:32.969397    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:32.979606    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:32.979677    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:32.989425    3939 logs.go:276] 0 containers: []
	W0827 15:29:32.989436    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:32.989495    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:33.000118    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:33.000134    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:33.000139    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:33.012934    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:33.012945    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:33.025609    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:33.025620    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:33.037664    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:33.037674    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:33.075180    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:33.075191    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:33.090001    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:33.090014    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:33.101797    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:33.101808    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:33.113608    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:33.113622    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:33.125600    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:33.125611    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:33.143183    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:33.143195    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:33.147747    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:33.147754    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:33.161843    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:33.161857    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:33.178849    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:33.178860    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:33.202105    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:33.202114    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:33.238625    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:33.238634    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:33.250035    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:33.250046    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:35.788545    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:40.790657    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:40.790801    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:40.807438    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:40.807533    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:40.819676    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:40.819748    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:40.830314    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:40.830399    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:40.846279    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:40.846354    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:40.856919    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:40.856987    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:40.867897    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:40.867964    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:40.877915    3939 logs.go:276] 0 containers: []
	W0827 15:29:40.877926    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:40.877988    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:40.888710    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:40.888726    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:40.888732    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:40.893009    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:40.893018    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:40.929854    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:40.929868    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:40.941071    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:40.941083    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:40.954212    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:40.954229    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:40.995560    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:40.995568    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:41.008425    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:41.008437    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:41.020032    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:41.020044    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:41.042841    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:41.042849    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:41.054453    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:41.054463    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:41.066964    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:41.066978    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:41.085513    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:41.085522    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:41.122185    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:41.122197    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:41.135970    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:41.135984    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:41.150090    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:41.150100    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:41.162048    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:41.162058    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:43.678533    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:48.680973    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:48.681097    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:48.692975    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:48.693049    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:48.703593    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:48.703673    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:48.714255    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:48.714315    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:48.730002    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:48.730079    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:48.741113    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:48.741184    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:48.751270    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:48.751336    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:48.761475    3939 logs.go:276] 0 containers: []
	W0827 15:29:48.761485    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:48.761535    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:48.772003    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:48.772022    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:48.772027    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:48.786023    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:48.786033    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:48.800192    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:48.800204    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:48.814072    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:48.814084    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:48.828885    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:48.828896    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:48.852544    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:48.852552    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:48.869928    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:48.869939    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:48.882366    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:48.882376    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:48.893882    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:48.893893    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:48.906058    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:48.906071    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:48.945568    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:48.945580    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:48.957531    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:48.957545    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:48.968786    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:48.968795    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:48.980250    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:48.980260    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:48.984308    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:48.984317    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:49.020274    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:49.020286    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:51.560596    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:29:56.562392    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:29:56.562593    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:29:56.582109    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:29:56.582204    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:29:56.596590    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:29:56.596672    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:29:56.608960    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:29:56.609030    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:29:56.619985    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:29:56.620058    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:29:56.630754    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:29:56.630821    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:29:56.645288    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:29:56.645354    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:29:56.655372    3939 logs.go:276] 0 containers: []
	W0827 15:29:56.655384    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:29:56.655442    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:29:56.670373    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:29:56.670391    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:29:56.670397    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:29:56.682532    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:29:56.682546    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:29:56.694549    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:29:56.694564    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:29:56.706464    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:29:56.706479    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:29:56.741023    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:29:56.741038    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:29:56.754950    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:29:56.754961    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:29:56.769026    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:29:56.769039    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:29:56.783229    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:29:56.783243    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:29:56.794493    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:29:56.794504    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:29:56.812461    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:29:56.812471    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:29:56.830780    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:29:56.830791    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:29:56.855326    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:29:56.855334    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:29:56.869059    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:29:56.869072    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:29:56.881492    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:29:56.881505    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:29:56.919179    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:29:56.919190    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:29:56.957512    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:29:56.957523    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:29:59.464052    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:04.465105    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:04.465301    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:04.485917    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:30:04.486009    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:04.508018    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:30:04.508098    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:04.519585    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:30:04.519646    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:04.530388    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:30:04.530451    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:04.541094    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:30:04.541162    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:04.552140    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:30:04.552208    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:04.566589    3939 logs.go:276] 0 containers: []
	W0827 15:30:04.566601    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:04.566661    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:04.580088    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:30:04.580104    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:30:04.580109    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:30:04.592411    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:30:04.592423    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:04.604398    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:04.604409    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:04.642656    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:30:04.642669    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:30:04.685443    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:30:04.685456    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:30:04.700650    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:30:04.700665    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:30:04.714212    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:04.714229    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:04.737217    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:04.737226    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:04.772969    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:30:04.772979    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:30:04.787979    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:30:04.787992    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:30:04.801667    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:30:04.801676    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:30:04.819794    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:04.819806    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:04.823957    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:30:04.823965    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:30:04.835166    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:30:04.835180    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:30:04.846696    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:30:04.846708    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:30:04.858410    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:30:04.858423    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:30:07.375899    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:12.376738    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:12.376949    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:12.410543    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:30:12.410647    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:12.436248    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:30:12.436319    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:12.450776    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:30:12.450846    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:12.466072    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:30:12.466144    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:12.476458    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:30:12.476523    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:12.487150    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:30:12.487219    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:12.498513    3939 logs.go:276] 0 containers: []
	W0827 15:30:12.498524    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:12.498581    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:12.515698    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:30:12.515717    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:30:12.515723    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:30:12.532320    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:30:12.532331    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:30:12.550239    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:30:12.550251    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:30:12.562224    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:12.562235    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:12.598332    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:30:12.598346    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:30:12.612755    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:30:12.612765    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:30:12.624327    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:30:12.624342    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:30:12.637994    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:30:12.638007    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:30:12.650520    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:12.650530    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:12.673533    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:30:12.673543    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:12.687286    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:30:12.687301    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:30:12.702622    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:30:12.702633    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:30:12.717457    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:30:12.717472    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:30:12.729821    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:12.729837    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:12.769634    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:12.769645    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:12.774271    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:30:12.774277    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:30:15.316347    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:20.318622    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:20.318804    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:20.338154    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:30:20.338250    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:20.352326    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:30:20.352402    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:20.364431    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:30:20.364524    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:20.375450    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:30:20.375523    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:20.390583    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:30:20.390649    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:20.401244    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:30:20.401312    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:20.411211    3939 logs.go:276] 0 containers: []
	W0827 15:30:20.411221    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:20.411282    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:20.421331    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:30:20.421347    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:20.421352    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:20.459046    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:30:20.459060    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:30:20.475271    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:30:20.475285    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:30:20.487135    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:30:20.487145    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:30:20.500839    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:20.500852    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:20.524066    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:20.524072    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:20.561211    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:30:20.561223    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:30:20.573747    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:30:20.573760    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:30:20.592272    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:30:20.592283    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:30:20.606294    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:30:20.606305    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:30:20.623858    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:30:20.623871    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:30:20.635931    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:30:20.635943    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:20.647832    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:20.647842    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:20.651867    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:30:20.651874    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:30:20.694574    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:30:20.694587    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:30:20.708966    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:30:20.708979    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:30:23.226211    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:28.228062    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:28.228207    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:28.241892    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:30:28.241977    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:28.253478    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:30:28.253552    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:28.264586    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:30:28.264653    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:28.274829    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:30:28.274895    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:28.284981    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:30:28.285047    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:28.296111    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:30:28.296193    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:28.307586    3939 logs.go:276] 0 containers: []
	W0827 15:30:28.307599    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:28.307662    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:28.317995    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:30:28.318026    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:28.318032    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:28.352931    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:30:28.352945    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:30:28.364485    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:30:28.364498    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:30:28.405324    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:30:28.405338    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:30:28.418885    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:28.418897    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:28.440495    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:30:28.440506    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:28.452947    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:30:28.452961    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:30:28.467876    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:30:28.467889    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:30:28.478750    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:30:28.478762    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:30:28.496580    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:30:28.496591    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:30:28.508243    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:28.508257    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:28.548205    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:28.548224    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:28.553050    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:30:28.553059    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:30:28.567533    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:30:28.567547    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:30:28.579917    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:30:28.579928    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:30:28.591784    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:30:28.591796    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:30:31.106832    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:36.108963    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:36.109137    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:36.121620    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:30:36.121701    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:36.132891    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:30:36.132958    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:36.143614    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:30:36.143675    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:36.154065    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:30:36.154136    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:36.164966    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:30:36.165029    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:36.175981    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:30:36.176055    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:36.186107    3939 logs.go:276] 0 containers: []
	W0827 15:30:36.186121    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:36.186184    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:36.196305    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:30:36.196327    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:36.196333    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:36.235748    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:36.235757    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:36.271136    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:30:36.271147    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:36.283155    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:36.283169    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:36.287095    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:30:36.287101    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:30:36.301683    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:30:36.301698    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:30:36.316157    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:30:36.316170    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:30:36.327943    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:30:36.327957    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:30:36.339043    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:30:36.339057    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:30:36.350400    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:30:36.350412    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:30:36.362444    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:30:36.362457    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:30:36.379698    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:30:36.379710    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:30:36.392955    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:36.392967    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:36.416604    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:30:36.416613    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:30:36.453798    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:30:36.453810    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:30:36.468720    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:30:36.468730    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:30:38.983089    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:43.985506    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:43.985653    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:30:44.000611    3939 logs.go:276] 2 containers: [1d02b2763b1e 9cd919fac506]
	I0827 15:30:44.000684    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:30:44.012472    3939 logs.go:276] 2 containers: [d60f8a8d5af4 a9f742447589]
	I0827 15:30:44.012541    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:30:44.023173    3939 logs.go:276] 1 containers: [7d2a74cb998e]
	I0827 15:30:44.023243    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:30:44.033802    3939 logs.go:276] 2 containers: [0fbf50c0b993 165d46598547]
	I0827 15:30:44.033867    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:30:44.044482    3939 logs.go:276] 1 containers: [141a0b958b51]
	I0827 15:30:44.044554    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:30:44.055862    3939 logs.go:276] 2 containers: [7ce329c8fc2e 585e47bfe28a]
	I0827 15:30:44.055926    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:30:44.070736    3939 logs.go:276] 0 containers: []
	W0827 15:30:44.070747    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:30:44.070810    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:30:44.080856    3939 logs.go:276] 1 containers: [d954b50b583e]
	I0827 15:30:44.080874    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:30:44.080879    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:30:44.117082    3939 logs.go:123] Gathering logs for kube-controller-manager [7ce329c8fc2e] ...
	I0827 15:30:44.117093    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce329c8fc2e"
	I0827 15:30:44.134661    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:30:44.134671    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:30:44.146063    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:30:44.146073    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:30:44.150024    3939 logs.go:123] Gathering logs for kube-controller-manager [585e47bfe28a] ...
	I0827 15:30:44.150030    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585e47bfe28a"
	I0827 15:30:44.162832    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:30:44.162842    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:30:44.184585    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:30:44.184594    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:30:44.221749    3939 logs.go:123] Gathering logs for kube-apiserver [9cd919fac506] ...
	I0827 15:30:44.221762    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cd919fac506"
	I0827 15:30:44.259136    3939 logs.go:123] Gathering logs for etcd [d60f8a8d5af4] ...
	I0827 15:30:44.259149    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d60f8a8d5af4"
	I0827 15:30:44.273251    3939 logs.go:123] Gathering logs for etcd [a9f742447589] ...
	I0827 15:30:44.273261    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9f742447589"
	I0827 15:30:44.287561    3939 logs.go:123] Gathering logs for coredns [7d2a74cb998e] ...
	I0827 15:30:44.287576    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d2a74cb998e"
	I0827 15:30:44.299143    3939 logs.go:123] Gathering logs for kube-scheduler [165d46598547] ...
	I0827 15:30:44.299153    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d46598547"
	I0827 15:30:44.310622    3939 logs.go:123] Gathering logs for storage-provisioner [d954b50b583e] ...
	I0827 15:30:44.310635    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d954b50b583e"
	I0827 15:30:44.322098    3939 logs.go:123] Gathering logs for kube-apiserver [1d02b2763b1e] ...
	I0827 15:30:44.322109    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d02b2763b1e"
	I0827 15:30:44.335925    3939 logs.go:123] Gathering logs for kube-proxy [141a0b958b51] ...
	I0827 15:30:44.335937    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 141a0b958b51"
	I0827 15:30:44.376113    3939 logs.go:123] Gathering logs for kube-scheduler [0fbf50c0b993] ...
	I0827 15:30:44.376126    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fbf50c0b993"
	I0827 15:30:46.888624    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:51.890926    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:30:51.891029    3939 kubeadm.go:597] duration metric: took 4m3.603192792s to restartPrimaryControlPlane
	W0827 15:30:51.891131    3939 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0827 15:30:51.891180    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0827 15:30:52.914460    3939 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.023298167s)
	I0827 15:30:52.914541    3939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 15:30:52.919368    3939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 15:30:52.922116    3939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 15:30:52.924913    3939 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 15:30:52.924919    3939 kubeadm.go:157] found existing configuration files:
	
	I0827 15:30:52.924946    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/admin.conf
	I0827 15:30:52.927792    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 15:30:52.927819    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 15:30:52.930822    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/kubelet.conf
	I0827 15:30:52.933564    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 15:30:52.933596    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 15:30:52.936579    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/controller-manager.conf
	I0827 15:30:52.939597    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 15:30:52.939620    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 15:30:52.942476    3939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/scheduler.conf
	I0827 15:30:52.945066    3939 kubeadm.go:163] "https://control-plane.minikube.internal:50493" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50493 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 15:30:52.945085    3939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
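
The sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so kubeadm can regenerate it. Here every grep exits with status 2 because `kubeadm reset` already removed the files, so all four removals are no-ops. A sketch of that keep-or-remove pass in Go, using the paths and endpoint from the log (run as root on the node):

    // stale_config_cleanup.go — mirror of the `sudo grep <endpoint> <conf>`
    // followed by `sudo rm -f <conf>` pattern in the log.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:50493"
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(conf)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or wrong endpoint: remove it (rm -f semantics,
    			// so a failed Remove on a missing file is ignored).
    			os.Remove(conf)
    			fmt.Println("removed stale", conf)
    			continue
    		}
    		fmt.Println("kept", conf)
    	}
    }
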
	I0827 15:30:52.948263    3939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0827 15:30:52.967538    3939 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0827 15:30:52.967683    3939 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 15:30:53.015345    3939 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 15:30:53.015404    3939 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 15:30:53.015458    3939 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0827 15:30:53.070279    3939 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 15:30:53.075322    3939 out.go:235]   - Generating certificates and keys ...
	I0827 15:30:53.075364    3939 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 15:30:53.075401    3939 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 15:30:53.075442    3939 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0827 15:30:53.075474    3939 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0827 15:30:53.075513    3939 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0827 15:30:53.075542    3939 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0827 15:30:53.075579    3939 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0827 15:30:53.075614    3939 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0827 15:30:53.075653    3939 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0827 15:30:53.075689    3939 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0827 15:30:53.075708    3939 kubeadm.go:310] [certs] Using the existing "sa" key
	I0827 15:30:53.075747    3939 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 15:30:53.173526    3939 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 15:30:53.273948    3939 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 15:30:53.305423    3939 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 15:30:53.590267    3939 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 15:30:53.620918    3939 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 15:30:53.621269    3939 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 15:30:53.621318    3939 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 15:30:53.703145    3939 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 15:30:53.708327    3939 out.go:235]   - Booting up control plane ...
	I0827 15:30:53.708379    3939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 15:30:53.708424    3939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 15:30:53.708465    3939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 15:30:53.708510    3939 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 15:30:53.708594    3939 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0827 15:30:58.208215    3939 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501412 seconds
	I0827 15:30:58.208285    3939 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0827 15:30:58.212255    3939 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0827 15:30:58.719877    3939 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0827 15:30:58.720001    3939 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-443000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0827 15:30:59.223353    3939 kubeadm.go:310] [bootstrap-token] Using token: 7c6cpc.ok1xbhjqz814b55n
	I0827 15:30:59.229128    3939 out.go:235]   - Configuring RBAC rules ...
	I0827 15:30:59.229193    3939 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0827 15:30:59.229247    3939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0827 15:30:59.236841    3939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0827 15:30:59.237609    3939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0827 15:30:59.238404    3939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0827 15:30:59.239139    3939 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0827 15:30:59.242079    3939 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0827 15:30:59.393861    3939 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0827 15:30:59.627276    3939 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0827 15:30:59.627894    3939 kubeadm.go:310] 
	I0827 15:30:59.627927    3939 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0827 15:30:59.627955    3939 kubeadm.go:310] 
	I0827 15:30:59.627996    3939 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0827 15:30:59.628000    3939 kubeadm.go:310] 
	I0827 15:30:59.628015    3939 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0827 15:30:59.628062    3939 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0827 15:30:59.628094    3939 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0827 15:30:59.628122    3939 kubeadm.go:310] 
	I0827 15:30:59.628153    3939 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0827 15:30:59.628156    3939 kubeadm.go:310] 
	I0827 15:30:59.628186    3939 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0827 15:30:59.628191    3939 kubeadm.go:310] 
	I0827 15:30:59.628230    3939 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0827 15:30:59.628276    3939 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0827 15:30:59.628355    3939 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0827 15:30:59.628364    3939 kubeadm.go:310] 
	I0827 15:30:59.628425    3939 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0827 15:30:59.628468    3939 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0827 15:30:59.628470    3939 kubeadm.go:310] 
	I0827 15:30:59.628509    3939 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7c6cpc.ok1xbhjqz814b55n \
	I0827 15:30:59.628572    3939 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e40211cdbb70880cf4203fcff26994c3c3ef69e4bd2b230e97a832f2aa67022 \
	I0827 15:30:59.628592    3939 kubeadm.go:310] 	--control-plane 
	I0827 15:30:59.628594    3939 kubeadm.go:310] 
	I0827 15:30:59.628633    3939 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0827 15:30:59.628639    3939 kubeadm.go:310] 
	I0827 15:30:59.628700    3939 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7c6cpc.ok1xbhjqz814b55n \
	I0827 15:30:59.628757    3939 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e40211cdbb70880cf4203fcff26994c3c3ef69e4bd2b230e97a832f2aa67022 
	I0827 15:30:59.628914    3939 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0827 15:30:59.628925    3939 cni.go:84] Creating CNI manager for ""
	I0827 15:30:59.628934    3939 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:30:59.633071    3939 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0827 15:30:59.638955    3939 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0827 15:30:59.641962    3939 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
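
The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log; the sketch below writes a generic bridge+portmap conflist of the kind used for the bridge CNI choice, with the plugin set and the 10.244.0.0/16 subnet as illustrative assumptions:

    // write_cni_conflist.go — sketch of the mkdir + conflist write above.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	// Mirrors: sudo mkdir -p /etc/cni/net.d, then write 1-k8s.conflist.
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }
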
	I0827 15:30:59.646706    3939 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0827 15:30:59.646748    3939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 15:30:59.646749    3939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-443000 minikube.k8s.io/updated_at=2024_08_27T15_30_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf minikube.k8s.io/name=stopped-upgrade-443000 minikube.k8s.io/primary=true
	I0827 15:30:59.649827    3939 ops.go:34] apiserver oom_adj: -16
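
The oom_adj probe above confirms the apiserver's OOM-score adjustment: -16 tells the kernel to strongly avoid OOM-killing the process. A small Go sketch of the same check, equivalent to the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command in the log:

    // oom_adj_check.go — find the apiserver PID and read its OOM adjustment.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("kube-apiserver not running:", err)
    		return
    	}
    	pid := strings.Fields(string(out))[0] // first matching PID
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Println("read failed:", err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
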
	I0827 15:30:59.689361    3939 kubeadm.go:1113] duration metric: took 42.647625ms to wait for elevateKubeSystemPrivileges
	I0827 15:30:59.689376    3939 kubeadm.go:394] duration metric: took 4m11.415037375s to StartCluster
	I0827 15:30:59.689386    3939 settings.go:142] acquiring lock: {Name:mk8039639095abb20902a2ce8e0a004770b18340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:30:59.689474    3939 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:30:59.689885    3939 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/kubeconfig: {Name:mk76bdfc088f48bbbf806c94a3244a333f8aeabd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:30:59.690100    3939 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:30:59.690109    3939 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0827 15:30:59.690149    3939 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-443000"
	I0827 15:30:59.690162    3939 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-443000"
	W0827 15:30:59.690167    3939 addons.go:243] addon storage-provisioner should already be in state true
	I0827 15:30:59.690179    3939 host.go:66] Checking if "stopped-upgrade-443000" exists ...
	I0827 15:30:59.690182    3939 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-443000"
	I0827 15:30:59.690208    3939 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-443000"
	I0827 15:30:59.690210    3939 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:30:59.691122    3939 kapi.go:59] client config for stopped-upgrade-443000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/profiles/stopped-upgrade-443000/client.key", CAFile:"/Users/jenkins/minikube-integration/19522-983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103fdbeb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0827 15:30:59.691237    3939 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-443000"
	W0827 15:30:59.691242    3939 addons.go:243] addon default-storageclass should already be in state true
	I0827 15:30:59.691250    3939 host.go:66] Checking if "stopped-upgrade-443000" exists ...
	I0827 15:30:59.693001    3939 out.go:177] * Verifying Kubernetes components...
	I0827 15:30:59.693304    3939 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0827 15:30:59.697183    3939 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0827 15:30:59.697190    3939 sshutil.go:53] new ssh client: &{IP:localhost Port:50458 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0827 15:30:59.700866    3939 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 15:30:59.704023    3939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 15:30:59.706957    3939 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 15:30:59.706964    3939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0827 15:30:59.706971    3939 sshutil.go:53] new ssh client: &{IP:localhost Port:50458 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/stopped-upgrade-443000/id_rsa Username:docker}
	I0827 15:30:59.797021    3939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 15:30:59.803481    3939 api_server.go:52] waiting for apiserver process to appear ...
	I0827 15:30:59.803526    3939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 15:30:59.808073    3939 api_server.go:72] duration metric: took 117.96625ms to wait for apiserver process to appear ...
	I0827 15:30:59.808081    3939 api_server.go:88] waiting for apiserver healthz status ...
	I0827 15:30:59.808089    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:30:59.842517    3939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0827 15:30:59.858351    3939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 15:31:00.231185    3939 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0827 15:31:00.231198    3939 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0827 15:31:04.810068    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:04.810089    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:09.810142    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:09.810194    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:14.810340    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:14.810377    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:19.810688    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:19.810725    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:24.811175    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:24.811208    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:29.811773    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:29.811801    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0827 15:31:30.232545    3939 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0827 15:31:30.237817    3939 out.go:177] * Enabled addons: storage-provisioner
	I0827 15:31:30.249609    3939 addons.go:510] duration metric: took 30.560510417s for enable addons: enabled=[storage-provisioner]
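Note the asymmetry above: the storage-provisioner manifests were applied from inside the guest over SSH and so succeeded, while the default-storageclass callback talks to the apiserver from the host at https://10.0.2.15:8443, which never becomes reachable in this run. If the apiserver did recover, the failed addon could be retried with the standard minikube CLI (illustrative):

    # re-run the addon that failed above, against the same profile
    minikube addons enable default-storageclass -p stopped-upgrade-443000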
	I0827 15:31:34.812567    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:34.812627    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:39.813789    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:39.813825    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:44.815224    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:44.815260    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:49.817054    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:49.817077    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:54.819139    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:54.819182    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:31:59.821283    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:31:59.821384    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:31:59.832173    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:31:59.832270    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:31:59.842747    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:31:59.842817    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:31:59.853946    3939 logs.go:276] 2 containers: [58d1b38de4f5 35ab6f2ba825]
	I0827 15:31:59.854020    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:31:59.865041    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:31:59.865108    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:31:59.875212    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:31:59.875277    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:31:59.885579    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:31:59.885639    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:31:59.895538    3939 logs.go:276] 0 containers: []
	W0827 15:31:59.895551    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:31:59.895607    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:31:59.907394    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:31:59.907410    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:31:59.907417    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:31:59.912025    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:31:59.912032    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:31:59.926342    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:31:59.926352    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:31:59.938212    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:31:59.938224    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:31:59.953537    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:31:59.953552    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:31:59.967141    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:31:59.967155    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:31:59.979717    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:31:59.979732    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:32:00.018860    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:32:00.018871    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:32:00.056825    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:32:00.056837    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:32:00.070965    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:32:00.070978    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:32:00.083129    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:32:00.083142    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:32:00.094592    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:32:00.094604    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:32:00.112360    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:32:00.112373    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
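The block above is one full iteration of minikube's readiness loop: probe /healthz with roughly a 5-second client timeout, and when a probe fails, enumerate the k8s_* containers via docker ps filters, dump the last 400 lines of each component's logs plus the kubelet and docker journals, then probe again. The same cycle repeats below until the 6m0s node wait declared earlier expires. A rough manual equivalent of the probe/diagnose step (illustrative only, assuming a Linux guest with curl available):

    # probe the apiserver healthz endpoint with a 5s timeout; on failure,
    # dump recent kube-apiserver container logs, then retry
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz; do
      docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}' \
        | xargs -r -n1 docker logs --tail 400
      sleep 5
    done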
	I0827 15:32:02.637311    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:32:07.637957    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:32:07.638223    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:32:07.661446    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:32:07.661546    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:32:07.677887    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:32:07.677971    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:32:07.690730    3939 logs.go:276] 2 containers: [58d1b38de4f5 35ab6f2ba825]
	I0827 15:32:07.690797    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:32:07.701617    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:32:07.701700    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:32:07.712205    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:32:07.712270    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:32:07.722965    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:32:07.723033    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:32:07.732848    3939 logs.go:276] 0 containers: []
	W0827 15:32:07.732859    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:32:07.732906    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:32:07.744058    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:32:07.744073    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:32:07.744079    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:32:07.769123    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:32:07.769135    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:32:07.805792    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:32:07.805799    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:32:07.810713    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:32:07.810721    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:32:07.852830    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:32:07.852843    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:32:07.864959    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:32:07.864968    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:32:07.877342    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:32:07.877355    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:32:07.899984    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:32:07.899995    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:32:07.914777    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:32:07.914791    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:32:07.930057    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:32:07.930069    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:32:07.945671    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:32:07.945685    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:32:07.957715    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:32:07.957726    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:32:07.970274    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:32:07.970285    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:32:10.486439    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:32:15.489202    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:32:15.489601    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:32:15.522920    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:32:15.523048    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:32:15.542359    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:32:15.542426    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:32:15.557713    3939 logs.go:276] 2 containers: [58d1b38de4f5 35ab6f2ba825]
	I0827 15:32:15.557769    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:32:15.569500    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:32:15.569556    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:32:15.581283    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:32:15.581364    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:32:15.593210    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:32:15.593296    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:32:15.605761    3939 logs.go:276] 0 containers: []
	W0827 15:32:15.605775    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:32:15.605830    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:32:15.618049    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:32:15.618065    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:32:15.618072    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:32:15.657743    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:32:15.657766    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:32:15.670748    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:32:15.670765    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:32:15.690160    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:32:15.690181    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:32:15.716707    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:32:15.716725    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:32:15.729774    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:32:15.729784    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:32:15.734798    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:32:15.734805    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:32:15.771316    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:32:15.771328    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:32:15.785966    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:32:15.785976    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:32:15.800485    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:32:15.800496    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:32:15.812077    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:32:15.812087    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:32:15.827063    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:32:15.827075    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:32:15.841607    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:32:15.841617    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:32:18.356135    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:32:23.358260    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:32:23.358685    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:32:23.392883    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:32:23.393018    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:32:23.412440    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:32:23.412531    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:32:23.427269    3939 logs.go:276] 2 containers: [58d1b38de4f5 35ab6f2ba825]
	I0827 15:32:23.427342    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:32:23.439734    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:32:23.439805    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:32:23.450824    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:32:23.450894    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:32:23.461476    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:32:23.461537    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:32:23.471832    3939 logs.go:276] 0 containers: []
	W0827 15:32:23.471843    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:32:23.471898    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:32:23.483047    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:32:23.483063    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:32:23.483070    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:32:23.496209    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:32:23.496224    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:32:23.535725    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:32:23.535734    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:32:23.570112    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:32:23.570134    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:32:23.584962    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:32:23.584974    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:32:23.596903    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:32:23.596915    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:32:23.610090    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:32:23.610102    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:32:23.624692    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:32:23.624705    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:32:23.636394    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:32:23.636403    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:32:23.650451    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:32:23.650465    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:32:23.662175    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:32:23.662187    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:32:23.682288    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:32:23.682298    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:32:23.693572    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:32:23.693584    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:32:26.220241    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:32:31.222567    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:32:31.222998    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:32:31.273492    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:32:31.273610    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:32:31.294045    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:32:31.294156    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:32:31.308470    3939 logs.go:276] 2 containers: [58d1b38de4f5 35ab6f2ba825]
	I0827 15:32:31.308539    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:32:31.320604    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:32:31.320666    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:32:31.333438    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:32:31.333507    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:32:31.344989    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:32:31.345052    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:32:31.359623    3939 logs.go:276] 0 containers: []
	W0827 15:32:31.359638    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:32:31.359690    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:32:31.370409    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:32:31.370423    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:32:31.370428    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:32:31.405762    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:32:31.405776    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:32:31.421039    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:32:31.421048    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:32:31.432722    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:32:31.432737    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:32:31.444612    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:32:31.444625    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:32:31.460260    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:32:31.460272    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:32:31.484643    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:32:31.484653    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:32:31.509815    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:32:31.509830    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:32:31.514765    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:32:31.514771    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:32:31.528556    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:32:31.528568    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:32:31.541119    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:32:31.541131    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:32:31.557008    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:32:31.557022    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:32:31.570661    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:32:31.570671    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:32:34.110589    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:32:39.113186    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:32:39.113409    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:32:39.143633    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:32:39.143752    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:32:39.162513    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:32:39.162593    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:32:39.179651    3939 logs.go:276] 2 containers: [58d1b38de4f5 35ab6f2ba825]
	I0827 15:32:39.179723    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:32:39.190706    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:32:39.190760    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:32:39.201394    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:32:39.201467    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:32:39.211563    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:32:39.211631    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:32:39.221444    3939 logs.go:276] 0 containers: []
	W0827 15:32:39.221455    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:32:39.221505    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:32:39.231836    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:32:39.231852    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:32:39.231857    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:32:39.256810    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:32:39.256820    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:32:39.295025    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:32:39.295033    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:32:39.299309    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:32:39.299318    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:32:39.318276    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:32:39.318290    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:32:39.330556    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:32:39.330566    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:32:39.342374    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:32:39.342389    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:32:39.356984    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:32:39.356996    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:32:39.372809    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:32:39.372822    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:32:39.410289    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:32:39.410301    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:32:39.425762    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:32:39.425775    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:32:39.440208    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:32:39.440221    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:32:39.457567    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:32:39.457577    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:32:41.970905    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:32:46.973468    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:32:46.973912    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:32:47.010280    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:32:47.010403    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:32:47.031932    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:32:47.032049    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:32:47.051805    3939 logs.go:276] 2 containers: [58d1b38de4f5 35ab6f2ba825]
	I0827 15:32:47.051878    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:32:47.063482    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:32:47.063550    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:32:47.074173    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:32:47.074243    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:32:47.085079    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:32:47.085143    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:32:47.095434    3939 logs.go:276] 0 containers: []
	W0827 15:32:47.095445    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:32:47.095508    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:32:47.108064    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:32:47.108079    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:32:47.108085    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:32:47.131030    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:32:47.131039    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:32:47.142536    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:32:47.142546    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:32:47.162388    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:32:47.162398    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:32:47.178780    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:32:47.178793    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:32:47.214630    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:32:47.214642    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:32:47.226613    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:32:47.226625    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:32:47.238634    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:32:47.238647    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:32:47.254108    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:32:47.254121    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:32:47.265641    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:32:47.265655    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:32:47.283888    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:32:47.283901    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:32:47.321176    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:32:47.321187    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:32:47.325402    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:32:47.325410    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:32:49.844060    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:32:54.846494    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:32:54.846741    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:32:54.871459    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:32:54.871574    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:32:54.888307    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:32:54.888377    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:32:54.901005    3939 logs.go:276] 2 containers: [58d1b38de4f5 35ab6f2ba825]
	I0827 15:32:54.901080    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:32:54.912183    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:32:54.912249    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:32:54.922274    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:32:54.922335    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:32:54.932602    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:32:54.932661    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:32:54.942521    3939 logs.go:276] 0 containers: []
	W0827 15:32:54.942532    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:32:54.942584    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:32:54.952915    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:32:54.952933    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:32:54.952938    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:32:54.989926    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:32:54.989936    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:32:54.994459    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:32:54.994467    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:32:55.030860    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:32:55.030873    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:32:55.045613    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:32:55.045623    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:32:55.057058    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:32:55.057070    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:32:55.071151    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:32:55.071162    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:32:55.089681    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:32:55.089694    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:32:55.106094    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:32:55.106104    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:32:55.117837    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:32:55.117847    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:32:55.129402    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:32:55.129415    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:32:55.140907    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:32:55.140920    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:32:55.166048    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:32:55.166055    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:32:57.679893    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:33:02.682526    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:33:02.682834    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:33:02.720638    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:33:02.720784    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:33:02.741491    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:33:02.741607    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:33:02.756410    3939 logs.go:276] 2 containers: [58d1b38de4f5 35ab6f2ba825]
	I0827 15:33:02.756485    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:33:02.768628    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:33:02.768696    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:33:02.779770    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:33:02.779842    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:33:02.791240    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:33:02.791311    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:33:02.801567    3939 logs.go:276] 0 containers: []
	W0827 15:33:02.801580    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:33:02.801634    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:33:02.812499    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:33:02.812516    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:33:02.812521    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:33:02.847424    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:33:02.847435    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:33:02.862326    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:33:02.862339    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:33:02.874257    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:33:02.874270    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:33:02.888861    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:33:02.888870    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:33:02.900878    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:33:02.900890    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:33:02.939456    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:33:02.939466    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:33:02.943736    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:33:02.943744    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:33:02.956411    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:33:02.956422    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:33:02.974235    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:33:02.974244    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:33:02.998834    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:33:02.998841    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:33:03.012106    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:33:03.012119    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:33:03.032823    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:33:03.032835    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:33:05.547096    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:33:10.549553    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:33:10.549683    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:33:10.570275    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:33:10.570352    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:33:10.585345    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:33:10.585432    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:33:10.600500    3939 logs.go:276] 2 containers: [58d1b38de4f5 35ab6f2ba825]
	I0827 15:33:10.600561    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:33:10.611114    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:33:10.611169    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:33:10.621935    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:33:10.621994    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:33:10.632872    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:33:10.632924    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:33:10.643620    3939 logs.go:276] 0 containers: []
	W0827 15:33:10.643633    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:33:10.643687    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:33:10.654383    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:33:10.654398    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:33:10.654404    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:33:10.666403    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:33:10.666415    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:33:10.678659    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:33:10.678671    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:33:10.689999    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:33:10.690009    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:33:10.701904    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:33:10.701914    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:33:10.717177    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:33:10.717189    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:33:10.755044    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:33:10.755050    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:33:10.759232    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:33:10.759242    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:33:10.810900    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:33:10.810917    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:33:10.841991    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:33:10.842004    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:33:10.862769    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:33:10.862784    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:33:10.893122    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:33:10.893142    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:33:10.924647    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:33:10.924661    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:33:13.441905    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:33:18.444186    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:33:18.444540    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:33:18.493027    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:33:18.493150    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:33:18.511047    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:33:18.511118    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:33:18.525052    3939 logs.go:276] 4 containers: [2add41d01d7d 2a70888b747e 58d1b38de4f5 35ab6f2ba825]
	I0827 15:33:18.525130    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:33:18.536625    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:33:18.536681    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:33:18.547371    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:33:18.547440    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:33:18.557957    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:33:18.558010    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:33:18.568597    3939 logs.go:276] 0 containers: []
	W0827 15:33:18.568609    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:33:18.568668    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:33:18.579082    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:33:18.579099    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:33:18.579107    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:33:18.583366    3939 logs.go:123] Gathering logs for coredns [2a70888b747e] ...
	I0827 15:33:18.583375    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a70888b747e"
	I0827 15:33:18.594569    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:33:18.594580    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:33:18.606832    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:33:18.606844    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:33:18.625072    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:33:18.625081    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:33:18.660231    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:33:18.660244    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:33:18.675146    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:33:18.675159    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:33:18.686475    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:33:18.686485    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:33:18.698262    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:33:18.698276    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:33:18.735291    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:33:18.735302    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:33:18.755563    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:33:18.755575    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:33:18.767462    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:33:18.767472    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:33:18.780807    3939 logs.go:123] Gathering logs for coredns [2add41d01d7d] ...
	I0827 15:33:18.780818    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2add41d01d7d"
	I0827 15:33:18.791576    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:33:18.791587    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:33:18.802633    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:33:18.802642    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:33:21.327284    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:33:26.329695    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:33:26.330054    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:33:26.363546    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:33:26.363663    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:33:26.382676    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:33:26.382788    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:33:26.396697    3939 logs.go:276] 4 containers: [2add41d01d7d 2a70888b747e 58d1b38de4f5 35ab6f2ba825]
	I0827 15:33:26.396770    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:33:26.409074    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:33:26.409140    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:33:26.423341    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:33:26.423410    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:33:26.435188    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:33:26.435265    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:33:26.446145    3939 logs.go:276] 0 containers: []
	W0827 15:33:26.446156    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:33:26.446212    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:33:26.457547    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:33:26.457568    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:33:26.457573    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:33:26.472352    3939 logs.go:123] Gathering logs for coredns [2a70888b747e] ...
	I0827 15:33:26.472363    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a70888b747e"
	I0827 15:33:26.484817    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:33:26.484827    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:33:26.497396    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:33:26.497411    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:33:26.505378    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:33:26.505390    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:33:26.519694    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:33:26.519702    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:33:26.532197    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:33:26.532208    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:33:26.551954    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:33:26.551965    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:33:26.573095    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:33:26.573106    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:33:26.586169    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:33:26.586181    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:33:26.622213    3939 logs.go:123] Gathering logs for coredns [2add41d01d7d] ...
	I0827 15:33:26.622220    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2add41d01d7d"
	I0827 15:33:26.634906    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:33:26.634916    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:33:26.647005    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:33:26.647016    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:33:26.682169    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:33:26.682181    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:33:26.697793    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:33:26.697805    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:33:29.224859    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:33:34.226493    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:33:34.226916    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:33:34.266747    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:33:34.266866    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:33:34.288044    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:33:34.288146    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:33:34.304079    3939 logs.go:276] 4 containers: [2add41d01d7d 2a70888b747e 58d1b38de4f5 35ab6f2ba825]
	I0827 15:33:34.304162    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:33:34.316785    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:33:34.316844    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:33:34.328431    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:33:34.328492    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:33:34.339925    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:33:34.339994    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:33:34.351009    3939 logs.go:276] 0 containers: []
	W0827 15:33:34.351025    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:33:34.351081    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:33:34.363697    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:33:34.363715    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:33:34.363720    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:33:34.398911    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:33:34.398921    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:33:34.411911    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:33:34.411920    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:33:34.424350    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:33:34.424362    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:33:34.429204    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:33:34.429213    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:33:34.447797    3939 logs.go:123] Gathering logs for coredns [2add41d01d7d] ...
	I0827 15:33:34.447809    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2add41d01d7d"
	I0827 15:33:34.460986    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:33:34.460999    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:33:34.475815    3939 logs.go:123] Gathering logs for coredns [2a70888b747e] ...
	I0827 15:33:34.475827    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a70888b747e"
	I0827 15:33:34.493298    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:33:34.493310    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:33:34.508498    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:33:34.508508    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:33:34.520816    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:33:34.520830    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:33:34.544374    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:33:34.544381    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:33:34.559466    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:33:34.559477    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:33:34.575744    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:33:34.575759    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:33:34.588291    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:33:34.588305    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:33:37.127638    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:33:42.130233    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:33:42.130342    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:33:42.142451    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:33:42.142509    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:33:42.153979    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:33:42.154033    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:33:42.166886    3939 logs.go:276] 4 containers: [2add41d01d7d 2a70888b747e 58d1b38de4f5 35ab6f2ba825]
	I0827 15:33:42.166969    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:33:42.180753    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:33:42.180804    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:33:42.191968    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:33:42.192031    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:33:42.204498    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:33:42.204548    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:33:42.217375    3939 logs.go:276] 0 containers: []
	W0827 15:33:42.217387    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:33:42.217437    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:33:42.231699    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:33:42.231716    3939 logs.go:123] Gathering logs for coredns [2add41d01d7d] ...
	I0827 15:33:42.231721    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2add41d01d7d"
	I0827 15:33:42.243846    3939 logs.go:123] Gathering logs for coredns [2a70888b747e] ...
	I0827 15:33:42.243858    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a70888b747e"
	I0827 15:33:42.257749    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:33:42.257761    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:33:42.270352    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:33:42.270363    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:33:42.282310    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:33:42.282323    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:33:42.308305    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:33:42.308316    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:33:42.348033    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:33:42.348053    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:33:42.353212    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:33:42.353223    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:33:42.390631    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:33:42.390642    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:33:42.406554    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:33:42.406566    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:33:42.421794    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:33:42.421806    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:33:42.443582    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:33:42.443592    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:33:42.457289    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:33:42.457302    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:33:42.472287    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:33:42.472301    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:33:42.491096    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:33:42.491109    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:33:45.007857    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:33:50.010203    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:33:50.010670    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:33:50.049901    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:33:50.050048    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:33:50.072150    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:33:50.072260    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:33:50.088075    3939 logs.go:276] 4 containers: [2add41d01d7d 2a70888b747e 58d1b38de4f5 35ab6f2ba825]
	I0827 15:33:50.088157    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:33:50.103895    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:33:50.103951    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:33:50.114750    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:33:50.114806    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:33:50.125123    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:33:50.125184    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:33:50.135023    3939 logs.go:276] 0 containers: []
	W0827 15:33:50.135032    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:33:50.135080    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:33:50.145674    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:33:50.145696    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:33:50.145702    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:33:50.157375    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:33:50.157387    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:33:50.195806    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:33:50.195817    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:33:50.200503    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:33:50.200509    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:33:50.235954    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:33:50.235966    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:33:50.251199    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:33:50.251210    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:33:50.262540    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:33:50.262553    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:33:50.274486    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:33:50.274499    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:33:50.291452    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:33:50.291463    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:33:50.306435    3939 logs.go:123] Gathering logs for coredns [2a70888b747e] ...
	I0827 15:33:50.306448    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a70888b747e"
	I0827 15:33:50.317649    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:33:50.317661    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:33:50.333444    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:33:50.333455    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:33:50.357004    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:33:50.357016    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:33:50.368671    3939 logs.go:123] Gathering logs for coredns [2add41d01d7d] ...
	I0827 15:33:50.368682    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2add41d01d7d"
	I0827 15:33:50.386509    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:33:50.386523    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:33:52.902296    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:33:57.904504    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:33:57.904614    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:33:57.915807    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:33:57.915877    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:33:57.926031    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:33:57.926098    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:33:57.938153    3939 logs.go:276] 4 containers: [2add41d01d7d 2a70888b747e 58d1b38de4f5 35ab6f2ba825]
	I0827 15:33:57.938221    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:33:57.951267    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:33:57.951325    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:33:57.961386    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:33:57.961452    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:33:57.971997    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:33:57.972060    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:33:57.981749    3939 logs.go:276] 0 containers: []
	W0827 15:33:57.981760    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:33:57.981811    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:33:57.991968    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:33:57.991987    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:33:57.991993    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:33:58.003634    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:33:58.003648    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:33:58.018260    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:33:58.018274    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:33:58.040203    3939 logs.go:123] Gathering logs for coredns [2add41d01d7d] ...
	I0827 15:33:58.040216    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2add41d01d7d"
	I0827 15:33:58.055161    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:33:58.055175    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:33:58.066810    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:33:58.066823    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:33:58.084350    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:33:58.084360    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:33:58.123647    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:33:58.123656    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:33:58.158885    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:33:58.158897    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:33:58.172405    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:33:58.172416    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:33:58.187505    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:33:58.187517    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:33:58.192241    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:33:58.192250    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:33:58.218347    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:33:58.218354    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:33:58.230083    3939 logs.go:123] Gathering logs for coredns [2a70888b747e] ...
	I0827 15:33:58.230097    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a70888b747e"
	I0827 15:33:58.242950    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:33:58.242964    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:34:00.756962    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:34:05.759624    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:34:05.759690    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:34:05.772116    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:34:05.772190    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:34:05.783720    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:34:05.783773    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:34:05.799170    3939 logs.go:276] 4 containers: [2add41d01d7d 2a70888b747e 58d1b38de4f5 35ab6f2ba825]
	I0827 15:34:05.799237    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:34:05.810338    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:34:05.810390    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:34:05.822394    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:34:05.822451    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:34:05.833744    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:34:05.833795    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:34:05.844263    3939 logs.go:276] 0 containers: []
	W0827 15:34:05.844277    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:34:05.844338    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:34:05.856267    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:34:05.856283    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:34:05.856289    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:34:05.897664    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:34:05.897686    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:34:05.936422    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:34:05.936438    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:34:05.955361    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:34:05.955370    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:34:05.960005    3939 logs.go:123] Gathering logs for coredns [2add41d01d7d] ...
	I0827 15:34:05.960016    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2add41d01d7d"
	I0827 15:34:05.973691    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:34:05.973703    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:34:05.987427    3939 logs.go:123] Gathering logs for coredns [2a70888b747e] ...
	I0827 15:34:05.987437    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a70888b747e"
	I0827 15:34:05.999384    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:34:05.999398    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:34:06.011996    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:34:06.012009    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:34:06.025243    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:34:06.025254    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:34:06.049894    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:34:06.049910    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:34:06.065382    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:34:06.065398    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:34:06.080994    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:34:06.081006    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:34:06.099334    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:34:06.099346    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:34:06.115767    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:34:06.115779    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:34:08.631172    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:34:13.633847    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:34:13.634091    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:34:13.657011    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:34:13.657113    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:34:13.673197    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:34:13.673278    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:34:13.685458    3939 logs.go:276] 4 containers: [2add41d01d7d 2a70888b747e 58d1b38de4f5 35ab6f2ba825]
	I0827 15:34:13.685529    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:34:13.698779    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:34:13.698846    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:34:13.709260    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:34:13.709325    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:34:13.719880    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:34:13.719940    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:34:13.729423    3939 logs.go:276] 0 containers: []
	W0827 15:34:13.729434    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:34:13.729494    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:34:13.740026    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:34:13.740041    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:34:13.740047    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:34:13.744635    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:34:13.744643    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:34:13.758554    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:34:13.758567    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:34:13.774470    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:34:13.774481    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:34:13.788766    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:34:13.788778    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:34:13.806593    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:34:13.806603    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:34:13.817780    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:34:13.817790    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:34:13.841065    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:34:13.841071    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:34:13.855593    3939 logs.go:123] Gathering logs for coredns [2a70888b747e] ...
	I0827 15:34:13.855603    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a70888b747e"
	I0827 15:34:13.867661    3939 logs.go:123] Gathering logs for coredns [2add41d01d7d] ...
	I0827 15:34:13.867670    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2add41d01d7d"
	I0827 15:34:13.879425    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:34:13.879436    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:34:13.891035    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:34:13.891044    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:34:13.902694    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:34:13.902708    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:34:13.914766    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:34:13.914777    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:34:13.954137    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:34:13.954146    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:34:16.491839    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:34:21.494490    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:34:21.494910    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:34:21.528713    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:34:21.528823    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:34:21.549597    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:34:21.549669    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:34:21.562384    3939 logs.go:276] 4 containers: [2add41d01d7d 2a70888b747e 58d1b38de4f5 35ab6f2ba825]
	I0827 15:34:21.562452    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:34:21.572926    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:34:21.572997    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:34:21.583102    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:34:21.583167    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:34:21.594093    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:34:21.594157    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:34:21.608605    3939 logs.go:276] 0 containers: []
	W0827 15:34:21.608616    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:34:21.608670    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:34:21.618775    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:34:21.618793    3939 logs.go:123] Gathering logs for coredns [2add41d01d7d] ...
	I0827 15:34:21.618798    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2add41d01d7d"
	I0827 15:34:21.630215    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:34:21.630225    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:34:21.642175    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:34:21.642187    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:34:21.667098    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:34:21.667106    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:34:21.671236    3939 logs.go:123] Gathering logs for coredns [2a70888b747e] ...
	I0827 15:34:21.671242    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a70888b747e"
	I0827 15:34:21.682820    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:34:21.682833    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:34:21.695889    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:34:21.695903    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:34:21.708101    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:34:21.708112    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:34:21.719611    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:34:21.719627    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:34:21.754887    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:34:21.754902    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:34:21.769081    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:34:21.769092    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:34:21.787081    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:34:21.787094    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:34:21.826658    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:34:21.826666    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:34:21.841396    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:34:21.841405    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:34:21.853383    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:34:21.853396    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:34:24.369959    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:34:29.372635    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:34:29.373001    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:34:29.404692    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:34:29.404817    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:34:29.423828    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:34:29.423923    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:34:29.438922    3939 logs.go:276] 4 containers: [2add41d01d7d 2a70888b747e 58d1b38de4f5 35ab6f2ba825]
	I0827 15:34:29.438999    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:34:29.450328    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:34:29.450392    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:34:29.462756    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:34:29.462827    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:34:29.475405    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:34:29.475472    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:34:29.485949    3939 logs.go:276] 0 containers: []
	W0827 15:34:29.485960    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:34:29.486017    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:34:29.496162    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:34:29.496182    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:34:29.496187    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:34:29.500579    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:34:29.500586    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:34:29.544572    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:34:29.544586    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:34:29.559768    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:34:29.559782    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:34:29.585174    3939 logs.go:123] Gathering logs for coredns [2a70888b747e] ...
	I0827 15:34:29.585193    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a70888b747e"
	I0827 15:34:29.598251    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:34:29.598263    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:34:29.611527    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:34:29.611538    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:34:29.630095    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:34:29.630110    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:34:29.643433    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:34:29.643445    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:34:29.682184    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:34:29.682206    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:34:29.697742    3939 logs.go:123] Gathering logs for coredns [2add41d01d7d] ...
	I0827 15:34:29.697758    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2add41d01d7d"
	I0827 15:34:29.711283    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:34:29.711295    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:34:29.728243    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:34:29.728256    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:34:29.750366    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:34:29.750379    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:34:29.762805    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:34:29.762816    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:34:32.277517    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:34:37.279604    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:34:37.279731    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:34:37.293680    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:34:37.293743    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:34:37.304631    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:34:37.304698    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:34:37.315462    3939 logs.go:276] 4 containers: [2add41d01d7d 2a70888b747e 58d1b38de4f5 35ab6f2ba825]
	I0827 15:34:37.315537    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:34:37.326487    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:34:37.326564    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:34:37.337602    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:34:37.337668    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:34:37.348152    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:34:37.348217    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:34:37.358725    3939 logs.go:276] 0 containers: []
	W0827 15:34:37.358735    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:34:37.358787    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:34:37.369793    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:34:37.369809    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:34:37.369814    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:34:37.406698    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:34:37.406706    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:34:37.441683    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:34:37.441697    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:34:37.456816    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:34:37.456827    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:34:37.481572    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:34:37.481583    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:34:37.485922    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:34:37.485928    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:34:37.501032    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:34:37.501042    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:34:37.513321    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:34:37.513342    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:34:37.526152    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:34:37.526165    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:34:37.538599    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:34:37.538611    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:34:37.554987    3939 logs.go:123] Gathering logs for coredns [2add41d01d7d] ...
	I0827 15:34:37.555001    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2add41d01d7d"
	I0827 15:34:37.567581    3939 logs.go:123] Gathering logs for coredns [2a70888b747e] ...
	I0827 15:34:37.567594    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a70888b747e"
	I0827 15:34:37.582935    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:34:37.582948    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:34:37.594625    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:34:37.594635    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:34:37.606877    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:34:37.606890    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:34:40.124937    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:34:45.127265    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:34:45.127795    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:34:45.166701    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:34:45.166841    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:34:45.190473    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:34:45.190561    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:34:45.205986    3939 logs.go:276] 4 containers: [2add41d01d7d 2a70888b747e 58d1b38de4f5 35ab6f2ba825]
	I0827 15:34:45.206062    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:34:45.223548    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:34:45.223612    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:34:45.235021    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:34:45.235101    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:34:45.248040    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:34:45.248111    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:34:45.259069    3939 logs.go:276] 0 containers: []
	W0827 15:34:45.259079    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:34:45.259128    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:34:45.274927    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:34:45.274941    3939 logs.go:123] Gathering logs for coredns [2add41d01d7d] ...
	I0827 15:34:45.274946    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2add41d01d7d"
	I0827 15:34:45.287135    3939 logs.go:123] Gathering logs for coredns [2a70888b747e] ...
	I0827 15:34:45.287146    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a70888b747e"
	I0827 15:34:45.301775    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:34:45.301788    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:34:45.314656    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:34:45.314668    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:34:45.353305    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:34:45.353314    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:34:45.365752    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:34:45.365762    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:34:45.381040    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:34:45.381050    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:34:45.395461    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:34:45.395471    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:34:45.408154    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:34:45.408165    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:34:45.428201    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:34:45.428210    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:34:45.440113    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:34:45.440124    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:34:45.463380    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:34:45.463389    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:34:45.467828    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:34:45.467837    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:34:45.505586    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:34:45.505602    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:34:45.518021    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:34:45.518036    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:34:48.036622    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:34:53.039403    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:34:53.039843    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0827 15:34:53.080817    3939 logs.go:276] 1 containers: [b25765a6d551]
	I0827 15:34:53.080946    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0827 15:34:53.103310    3939 logs.go:276] 1 containers: [82641749cd0c]
	I0827 15:34:53.103419    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0827 15:34:53.118870    3939 logs.go:276] 4 containers: [2add41d01d7d 2a70888b747e 58d1b38de4f5 35ab6f2ba825]
	I0827 15:34:53.118939    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0827 15:34:53.132114    3939 logs.go:276] 1 containers: [27e3aa7dacdd]
	I0827 15:34:53.132177    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0827 15:34:53.143553    3939 logs.go:276] 1 containers: [7ce505eea008]
	I0827 15:34:53.143619    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0827 15:34:53.154710    3939 logs.go:276] 1 containers: [5c269e7d8105]
	I0827 15:34:53.154777    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0827 15:34:53.165473    3939 logs.go:276] 0 containers: []
	W0827 15:34:53.165483    3939 logs.go:278] No container was found matching "kindnet"
	I0827 15:34:53.165527    3939 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0827 15:34:53.178675    3939 logs.go:276] 1 containers: [a2049b8bd96d]
	I0827 15:34:53.178694    3939 logs.go:123] Gathering logs for coredns [58d1b38de4f5] ...
	I0827 15:34:53.178700    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d1b38de4f5"
	I0827 15:34:53.190644    3939 logs.go:123] Gathering logs for coredns [35ab6f2ba825] ...
	I0827 15:34:53.190654    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35ab6f2ba825"
	I0827 15:34:53.203979    3939 logs.go:123] Gathering logs for kube-scheduler [27e3aa7dacdd] ...
	I0827 15:34:53.203991    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27e3aa7dacdd"
	I0827 15:34:53.219453    3939 logs.go:123] Gathering logs for coredns [2a70888b747e] ...
	I0827 15:34:53.219464    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a70888b747e"
	I0827 15:34:53.231294    3939 logs.go:123] Gathering logs for kube-proxy [7ce505eea008] ...
	I0827 15:34:53.231307    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ce505eea008"
	I0827 15:34:53.243595    3939 logs.go:123] Gathering logs for kubelet ...
	I0827 15:34:53.243604    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 15:34:53.282123    3939 logs.go:123] Gathering logs for dmesg ...
	I0827 15:34:53.282131    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 15:34:53.287093    3939 logs.go:123] Gathering logs for describe nodes ...
	I0827 15:34:53.287103    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 15:34:53.328279    3939 logs.go:123] Gathering logs for kube-apiserver [b25765a6d551] ...
	I0827 15:34:53.328291    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25765a6d551"
	I0827 15:34:53.345603    3939 logs.go:123] Gathering logs for coredns [2add41d01d7d] ...
	I0827 15:34:53.345617    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2add41d01d7d"
	I0827 15:34:53.358218    3939 logs.go:123] Gathering logs for storage-provisioner [a2049b8bd96d] ...
	I0827 15:34:53.358231    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2049b8bd96d"
	I0827 15:34:53.369979    3939 logs.go:123] Gathering logs for Docker ...
	I0827 15:34:53.369992    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0827 15:34:53.394412    3939 logs.go:123] Gathering logs for etcd [82641749cd0c] ...
	I0827 15:34:53.394419    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82641749cd0c"
	I0827 15:34:53.408869    3939 logs.go:123] Gathering logs for kube-controller-manager [5c269e7d8105] ...
	I0827 15:34:53.408880    3939 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c269e7d8105"
	I0827 15:34:53.427432    3939 logs.go:123] Gathering logs for container status ...
	I0827 15:34:53.427443    3939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 15:34:55.941477    3939 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0827 15:35:00.943516    3939 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0827 15:35:00.948894    3939 out.go:201] 
	W0827 15:35:00.952872    3939 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0827 15:35:00.952878    3939 out.go:270] * 
	W0827 15:35:00.953371    3939 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:35:00.968786    3939 out.go:201] 
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-443000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.94s)
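The wait loop in the trace above is the relevant failure mode for this test: minikube repeatedly re-inventories the control-plane containers, tails their logs, and re-probes https://10.0.2.15:8443/healthz with a 5s client timeout until its 6m node-wait budget expires. Below is a minimal Go sketch of such a healthz wait; the endpoint, probe timeout, and overall budget are taken from the log, while the TLS skip-verify is an assumption for a test cluster with a self-signed certificate, and the sketch is illustrative rather than minikube's actual implementation.

// waitForHealthy probes an apiserver /healthz endpoint until it reports
// "ok" or an overall budget expires. Endpoint, 5s probe timeout, and 6m
// budget mirror the log above; the TLS skip-verify is an assumption for
// a test cluster with a self-signed certificate.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthy(url string, probeTimeout, budget time.Duration) error {
	client := &http.Client{
		Timeout: probeTimeout, // each probe fails fast: "Client.Timeout exceeded" above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // healthy: the gate this run never passed
			}
		}
		time.Sleep(2 * time.Second) // pause before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy: %s", url)
}

func main() {
	err := waitForHealthy("https://10.0.2.15:8443/healthz", 5*time.Second, 6*time.Minute)
	if err != nil {
		fmt.Println("X", err)
	}
}

In this run every probe fails, which is exactly the "apiserver healthz never reported healthy: context deadline exceeded" exit above.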
TestPause/serial/Start (9.92s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-587000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-587000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.890011625s)
-- stdout --
	* [pause-587000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-587000" primary control-plane node in "pause-587000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-587000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-587000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-587000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-587000 -n pause-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-587000 -n pause-587000: exit status 7 (33.683125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.92s)
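TestPause and essentially every remaining failure in this report share one root cause: QEMU is launched through socket_vmnet_client, and the socket_vmnet daemon's unix socket at /var/run/socket_vmnet refuses connections. A quick way to confirm the daemon is down is to dial the socket directly; a minimal sketch follows (the socket path comes from the log; the probe itself is illustrative and not part of the test suite).

// Probe the socket_vmnet control socket; "connection refused" here is the
// same failure as the ERROR lines in the minikube output above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		fmt.Println("the daemon is likely not running on this agent")
		return
	}
	conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", sock)
}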
TestNoKubernetes/serial/StartWithK8s (9.96s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-070000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-070000 --driver=qemu2 : exit status 80 (9.902123292s)
-- stdout --
	* [NoKubernetes-070000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-070000" primary control-plane node in "NoKubernetes-070000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-070000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-070000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-070000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-070000 -n NoKubernetes-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-070000 -n NoKubernetes-070000: exit status 7 (55.107041ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.96s)
TestNoKubernetes/serial/StartWithStopK8s (5.3s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-070000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-070000 --no-kubernetes --driver=qemu2 : exit status 80 (5.240321125s)
-- stdout --
	* [NoKubernetes-070000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-070000
	* Restarting existing qemu2 VM for "NoKubernetes-070000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-070000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-070000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-070000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-070000 -n NoKubernetes-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-070000 -n NoKubernetes-070000: exit status 7 (59.754583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)
TestNoKubernetes/serial/Start (5.31s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-070000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-070000 --no-kubernetes --driver=qemu2 : exit status 80 (5.244547625s)
-- stdout --
	* [NoKubernetes-070000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-070000
	* Restarting existing qemu2 VM for "NoKubernetes-070000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-070000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-070000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-070000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-070000 -n NoKubernetes-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-070000 -n NoKubernetes-070000: exit status 7 (62.694041ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (5.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-070000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-070000 --driver=qemu2 : exit status 80 (5.307554292s)

                                                
                                                
-- stdout --
	* [NoKubernetes-070000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-070000
	* Restarting existing qemu2 VM for "NoKubernetes-070000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-070000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-070000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-070000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-070000 -n NoKubernetes-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-070000 -n NoKubernetes-070000: exit status 7 (43.815417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.35s)
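All four TestNoKubernetes subtests end in the same post-mortem: helpers_test.go runs `minikube status --format={{.Host}}`, gets exit status 7 with state "Stopped", and notes that this "may be ok" (the profile exists but the host never started). The sketch below mirrors that status check; the binary path and profile name are the ones this run used, and the exit-code semantics are inferred from the log rather than from minikube's documentation.

// statusCheck mirrors the post-mortem above: run `minikube status
// --format={{.Host}}` and treat a non-zero exit that still prints a state
// (exit status 7 / "Stopped" in this run) as informational, not fatal.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "NoKubernetes-070000")
	out, err := cmd.Output() // stdout still carries the state on non-zero exit
	state := strings.TrimSpace(string(out))
	if err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("status exited %d (may be ok), state=%q\n", exitErr.ExitCode(), state)
			return
		}
		fmt.Println("could not run minikube status:", err)
		return
	}
	fmt.Printf("host state: %s\n", state)
}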
TestNetworkPlugins/group/auto/Start (9.99s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.982663167s)
-- stdout --
	* [auto-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-554000" primary control-plane node in "auto-554000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-554000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0827 15:33:19.299463    4424 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:33:19.299597    4424 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:33:19.299600    4424 out.go:358] Setting ErrFile to fd 2...
	I0827 15:33:19.299602    4424 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:33:19.299734    4424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:33:19.300802    4424 out.go:352] Setting JSON to false
	I0827 15:33:19.318129    4424 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3764,"bootTime":1724794235,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:33:19.318226    4424 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:33:19.323502    4424 out.go:177] * [auto-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:33:19.327410    4424 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:33:19.327443    4424 notify.go:220] Checking for updates...
	I0827 15:33:19.335248    4424 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:33:19.338329    4424 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:33:19.342305    4424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:33:19.346304    4424 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:33:19.349343    4424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:33:19.352706    4424 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:33:19.352773    4424 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:33:19.352819    4424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:33:19.356305    4424 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:33:19.363278    4424 start.go:297] selected driver: qemu2
	I0827 15:33:19.363285    4424 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:33:19.363293    4424 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:33:19.365431    4424 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:33:19.369269    4424 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:33:19.372419    4424 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:33:19.372449    4424 cni.go:84] Creating CNI manager for ""
	I0827 15:33:19.372455    4424 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:33:19.372459    4424 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 15:33:19.372489    4424 start.go:340] cluster config:
	{Name:auto-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:33:19.375869    4424 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:33:19.384316    4424 out.go:177] * Starting "auto-554000" primary control-plane node in "auto-554000" cluster
	I0827 15:33:19.387323    4424 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:33:19.387338    4424 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:33:19.387350    4424 cache.go:56] Caching tarball of preloaded images
	I0827 15:33:19.387416    4424 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:33:19.387421    4424 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:33:19.387479    4424 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/auto-554000/config.json ...
	I0827 15:33:19.387489    4424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/auto-554000/config.json: {Name:mkaba1f40c03f48b276f1376776c02b58fbb3e33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:33:19.387800    4424 start.go:360] acquireMachinesLock for auto-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:33:19.387835    4424 start.go:364] duration metric: took 29.834µs to acquireMachinesLock for "auto-554000"
	I0827 15:33:19.387845    4424 start.go:93] Provisioning new machine with config: &{Name:auto-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:33:19.387880    4424 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:33:19.395294    4424 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:33:19.411345    4424 start.go:159] libmachine.API.Create for "auto-554000" (driver="qemu2")
	I0827 15:33:19.411383    4424 client.go:168] LocalClient.Create starting
	I0827 15:33:19.411461    4424 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:33:19.411492    4424 main.go:141] libmachine: Decoding PEM data...
	I0827 15:33:19.411502    4424 main.go:141] libmachine: Parsing certificate...
	I0827 15:33:19.411546    4424 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:33:19.411570    4424 main.go:141] libmachine: Decoding PEM data...
	I0827 15:33:19.411579    4424 main.go:141] libmachine: Parsing certificate...
	I0827 15:33:19.412027    4424 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:33:19.569145    4424 main.go:141] libmachine: Creating SSH key...
	I0827 15:33:19.787544    4424 main.go:141] libmachine: Creating Disk image...
	I0827 15:33:19.787557    4424 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:33:19.787818    4424 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/disk.qcow2
	I0827 15:33:19.797993    4424 main.go:141] libmachine: STDOUT: 
	I0827 15:33:19.798012    4424 main.go:141] libmachine: STDERR: 
	I0827 15:33:19.798065    4424 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/disk.qcow2 +20000M
	I0827 15:33:19.809413    4424 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:33:19.809433    4424 main.go:141] libmachine: STDERR: 
	I0827 15:33:19.809447    4424 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/disk.qcow2
	I0827 15:33:19.809452    4424 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:33:19.809460    4424 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:33:19.809488    4424 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:a2:11:a1:27:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/disk.qcow2
	I0827 15:33:19.811209    4424 main.go:141] libmachine: STDOUT: 
	I0827 15:33:19.811225    4424 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:33:19.811244    4424 client.go:171] duration metric: took 399.86925ms to LocalClient.Create
	I0827 15:33:21.813354    4424 start.go:128] duration metric: took 2.425532292s to createHost
	I0827 15:33:21.813414    4424 start.go:83] releasing machines lock for "auto-554000", held for 2.425651458s
	W0827 15:33:21.813511    4424 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:33:21.826416    4424 out.go:177] * Deleting "auto-554000" in qemu2 ...
	W0827 15:33:21.847425    4424 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:33:21.847447    4424 start.go:729] Will try again in 5 seconds ...
	I0827 15:33:26.849420    4424 start.go:360] acquireMachinesLock for auto-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:33:26.849572    4424 start.go:364] duration metric: took 121.666µs to acquireMachinesLock for "auto-554000"
	I0827 15:33:26.849592    4424 start.go:93] Provisioning new machine with config: &{Name:auto-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:33:26.849678    4424 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:33:26.856961    4424 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:33:26.872661    4424 start.go:159] libmachine.API.Create for "auto-554000" (driver="qemu2")
	I0827 15:33:26.872682    4424 client.go:168] LocalClient.Create starting
	I0827 15:33:26.872744    4424 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:33:26.872786    4424 main.go:141] libmachine: Decoding PEM data...
	I0827 15:33:26.872796    4424 main.go:141] libmachine: Parsing certificate...
	I0827 15:33:26.872828    4424 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:33:26.872851    4424 main.go:141] libmachine: Decoding PEM data...
	I0827 15:33:26.872858    4424 main.go:141] libmachine: Parsing certificate...
	I0827 15:33:26.873135    4424 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:33:27.027614    4424 main.go:141] libmachine: Creating SSH key...
	I0827 15:33:27.189748    4424 main.go:141] libmachine: Creating Disk image...
	I0827 15:33:27.189755    4424 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:33:27.189997    4424 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/disk.qcow2
	I0827 15:33:27.199768    4424 main.go:141] libmachine: STDOUT: 
	I0827 15:33:27.199788    4424 main.go:141] libmachine: STDERR: 
	I0827 15:33:27.199841    4424 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/disk.qcow2 +20000M
	I0827 15:33:27.207940    4424 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:33:27.207958    4424 main.go:141] libmachine: STDERR: 
	I0827 15:33:27.207971    4424 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/disk.qcow2
	I0827 15:33:27.207975    4424 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:33:27.207981    4424 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:33:27.208008    4424 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:1a:e7:41:a9:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/auto-554000/disk.qcow2
	I0827 15:33:27.209726    4424 main.go:141] libmachine: STDOUT: 
	I0827 15:33:27.209745    4424 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:33:27.209758    4424 client.go:171] duration metric: took 337.084208ms to LocalClient.Create
	I0827 15:33:29.211898    4424 start.go:128] duration metric: took 2.362258042s to createHost
	I0827 15:33:29.211972    4424 start.go:83] releasing machines lock for "auto-554000", held for 2.362468625s
	W0827 15:33:29.212388    4424 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:33:29.226033    4424 out.go:201] 
	W0827 15:33:29.230110    4424 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:33:29.230136    4424 out.go:270] * 
	W0827 15:33:29.232063    4424 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:33:29.241005    4424 out.go:201] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.99s)
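The stderr trace above shows the full provisioning path and its failure handling: libmachine builds the disk with qemu-img convert/resize, launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and on failure deletes the machine, waits 5 seconds, and retries exactly once before exiting with GUEST_PROVISION. A compact sketch of that two-attempt flow follows; createHost and deleteHost are placeholders for the libmachine steps, and the timings and messages come from the log.

// startWithRetry sketches the two-attempt provisioning flow in the trace
// above: create the host, and on failure tear it down, wait 5 seconds, and
// try exactly once more before giving up with GUEST_PROVISION.
package main

import (
	"errors"
	"fmt"
	"time"
)

func createHost(name string) error {
	// placeholder for the real create path (qemu-img convert/resize, then
	// qemu-system-aarch64 via socket_vmnet_client), which fails in this run:
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func deleteHost(name string) {
	// placeholder for `* Deleting "<name>" in qemu2 ...`
}

func startWithRetry(name string) error {
	err := createHost(name)
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	deleteHost(name)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := createHost(name); err != nil {
		return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
	}
	return nil
}

func main() {
	if err := startWithRetry("auto-554000"); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}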
TestNetworkPlugins/group/flannel/Start (9.98s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.981528375s)
-- stdout --
	* [flannel-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-554000" primary control-plane node in "flannel-554000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-554000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:33:31.400870    4533 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:33:31.400981    4533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:33:31.400985    4533 out.go:358] Setting ErrFile to fd 2...
	I0827 15:33:31.400988    4533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:33:31.401107    4533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:33:31.402183    4533 out.go:352] Setting JSON to false
	I0827 15:33:31.419051    4533 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3776,"bootTime":1724794235,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:33:31.419146    4533 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:33:31.426821    4533 out.go:177] * [flannel-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:33:31.434599    4533 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:33:31.434626    4533 notify.go:220] Checking for updates...
	I0827 15:33:31.441506    4533 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:33:31.444573    4533 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:33:31.448532    4533 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:33:31.451565    4533 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:33:31.454562    4533 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:33:31.457921    4533 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:33:31.457989    4533 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:33:31.458042    4533 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:33:31.462562    4533 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:33:31.468481    4533 start.go:297] selected driver: qemu2
	I0827 15:33:31.468487    4533 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:33:31.468494    4533 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:33:31.470869    4533 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:33:31.474536    4533 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:33:31.477717    4533 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:33:31.477735    4533 cni.go:84] Creating CNI manager for "flannel"
	I0827 15:33:31.477740    4533 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0827 15:33:31.477763    4533 start.go:340] cluster config:
	{Name:flannel-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:33:31.481765    4533 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:33:31.490513    4533 out.go:177] * Starting "flannel-554000" primary control-plane node in "flannel-554000" cluster
	I0827 15:33:31.494398    4533 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:33:31.494420    4533 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:33:31.494432    4533 cache.go:56] Caching tarball of preloaded images
	I0827 15:33:31.494505    4533 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:33:31.494513    4533 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:33:31.494571    4533 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/flannel-554000/config.json ...
	I0827 15:33:31.494583    4533 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/flannel-554000/config.json: {Name:mkbb97e93b4241a8047d2eabba5711a63fe4ea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:33:31.494935    4533 start.go:360] acquireMachinesLock for flannel-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:33:31.494971    4533 start.go:364] duration metric: took 31.25µs to acquireMachinesLock for "flannel-554000"
	I0827 15:33:31.494982    4533 start.go:93] Provisioning new machine with config: &{Name:flannel-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:33:31.495018    4533 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:33:31.502566    4533 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:33:31.518300    4533 start.go:159] libmachine.API.Create for "flannel-554000" (driver="qemu2")
	I0827 15:33:31.518324    4533 client.go:168] LocalClient.Create starting
	I0827 15:33:31.518403    4533 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:33:31.518449    4533 main.go:141] libmachine: Decoding PEM data...
	I0827 15:33:31.518466    4533 main.go:141] libmachine: Parsing certificate...
	I0827 15:33:31.518506    4533 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:33:31.518529    4533 main.go:141] libmachine: Decoding PEM data...
	I0827 15:33:31.518542    4533 main.go:141] libmachine: Parsing certificate...
	I0827 15:33:31.518945    4533 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:33:31.676369    4533 main.go:141] libmachine: Creating SSH key...
	I0827 15:33:31.863449    4533 main.go:141] libmachine: Creating Disk image...
	I0827 15:33:31.863459    4533 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:33:31.863707    4533 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/disk.qcow2
	I0827 15:33:31.873594    4533 main.go:141] libmachine: STDOUT: 
	I0827 15:33:31.873616    4533 main.go:141] libmachine: STDERR: 
	I0827 15:33:31.873673    4533 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/disk.qcow2 +20000M
	I0827 15:33:31.881822    4533 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:33:31.881848    4533 main.go:141] libmachine: STDERR: 
	I0827 15:33:31.881861    4533 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/disk.qcow2
	I0827 15:33:31.881866    4533 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:33:31.881878    4533 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:33:31.881907    4533 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:ee:da:2a:80:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/disk.qcow2
	I0827 15:33:31.883552    4533 main.go:141] libmachine: STDOUT: 
	I0827 15:33:31.883568    4533 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:33:31.883588    4533 client.go:171] duration metric: took 365.271667ms to LocalClient.Create
	I0827 15:33:33.885714    4533 start.go:128] duration metric: took 2.390744208s to createHost
	I0827 15:33:33.885779    4533 start.go:83] releasing machines lock for "flannel-554000", held for 2.3908775s
	W0827 15:33:33.885908    4533 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:33:33.901322    4533 out.go:177] * Deleting "flannel-554000" in qemu2 ...
	W0827 15:33:33.929691    4533 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:33:33.929719    4533 start.go:729] Will try again in 5 seconds ...
	I0827 15:33:38.931730    4533 start.go:360] acquireMachinesLock for flannel-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:33:38.932267    4533 start.go:364] duration metric: took 410.25µs to acquireMachinesLock for "flannel-554000"
	I0827 15:33:38.932412    4533 start.go:93] Provisioning new machine with config: &{Name:flannel-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:33:38.932621    4533 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:33:38.937101    4533 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:33:38.979096    4533 start.go:159] libmachine.API.Create for "flannel-554000" (driver="qemu2")
	I0827 15:33:38.979144    4533 client.go:168] LocalClient.Create starting
	I0827 15:33:38.979262    4533 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:33:38.979319    4533 main.go:141] libmachine: Decoding PEM data...
	I0827 15:33:38.979338    4533 main.go:141] libmachine: Parsing certificate...
	I0827 15:33:38.979396    4533 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:33:38.979439    4533 main.go:141] libmachine: Decoding PEM data...
	I0827 15:33:38.979457    4533 main.go:141] libmachine: Parsing certificate...
	I0827 15:33:38.980001    4533 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:33:39.144011    4533 main.go:141] libmachine: Creating SSH key...
	I0827 15:33:39.291713    4533 main.go:141] libmachine: Creating Disk image...
	I0827 15:33:39.291721    4533 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:33:39.291977    4533 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/disk.qcow2
	I0827 15:33:39.301727    4533 main.go:141] libmachine: STDOUT: 
	I0827 15:33:39.301747    4533 main.go:141] libmachine: STDERR: 
	I0827 15:33:39.301812    4533 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/disk.qcow2 +20000M
	I0827 15:33:39.310325    4533 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:33:39.310358    4533 main.go:141] libmachine: STDERR: 
	I0827 15:33:39.310373    4533 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/disk.qcow2
	I0827 15:33:39.310376    4533 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:33:39.310384    4533 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:33:39.310418    4533 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:a9:11:b7:a6:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/flannel-554000/disk.qcow2
	I0827 15:33:39.312237    4533 main.go:141] libmachine: STDOUT: 
	I0827 15:33:39.312254    4533 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:33:39.312267    4533 client.go:171] duration metric: took 333.130667ms to LocalClient.Create
	I0827 15:33:41.314388    4533 start.go:128] duration metric: took 2.381816625s to createHost
	I0827 15:33:41.314459    4533 start.go:83] releasing machines lock for "flannel-554000", held for 2.382229417s
	W0827 15:33:41.314758    4533 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:33:41.323258    4533 out.go:201] 
	W0827 15:33:41.329316    4533 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:33:41.329330    4533 out.go:270] * 
	W0827 15:33:41.330722    4533 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:33:41.341306    4533 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.98s)
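The flannel profile never reaches any CNI-specific logic: VM creation fails at the same socket_vmnet connection step, after which minikube deletes the half-created machine, waits 5 seconds, retries once, and gives up. When the daemon has died rather than never been installed, restarting it under launchd is the usual remedy; a sketch, assuming socket_vmnet was installed with its stock launchd plist (the io.github.lima-vm.socket_vmnet label is taken from that project's documentation, not from this report):

	# Kick the daemon and confirm the socket reappears
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	ls -l /var/run/socket_vmnet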

TestNetworkPlugins/group/enable-default-cni/Start (9.81s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.810111875s)

-- stdout --
	* [enable-default-cni-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-554000" primary control-plane node in "enable-default-cni-554000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-554000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:33:43.703532    4653 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:33:43.703667    4653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:33:43.703670    4653 out.go:358] Setting ErrFile to fd 2...
	I0827 15:33:43.703675    4653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:33:43.703818    4653 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:33:43.704900    4653 out.go:352] Setting JSON to false
	I0827 15:33:43.721315    4653 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3788,"bootTime":1724794235,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:33:43.721384    4653 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:33:43.727527    4653 out.go:177] * [enable-default-cni-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:33:43.735536    4653 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:33:43.735579    4653 notify.go:220] Checking for updates...
	I0827 15:33:43.742464    4653 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:33:43.745467    4653 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:33:43.748391    4653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:33:43.751414    4653 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:33:43.754502    4653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:33:43.757776    4653 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:33:43.757848    4653 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:33:43.757916    4653 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:33:43.762405    4653 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:33:43.769435    4653 start.go:297] selected driver: qemu2
	I0827 15:33:43.769441    4653 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:33:43.769447    4653 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:33:43.771817    4653 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:33:43.776431    4653 out.go:177] * Automatically selected the socket_vmnet network
	E0827 15:33:43.779534    4653 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0827 15:33:43.779546    4653 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:33:43.779576    4653 cni.go:84] Creating CNI manager for "bridge"
	I0827 15:33:43.779583    4653 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 15:33:43.779608    4653 start.go:340] cluster config:
	{Name:enable-default-cni-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:33:43.783502    4653 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:33:43.792437    4653 out.go:177] * Starting "enable-default-cni-554000" primary control-plane node in "enable-default-cni-554000" cluster
	I0827 15:33:43.795401    4653 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:33:43.795421    4653 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:33:43.795432    4653 cache.go:56] Caching tarball of preloaded images
	I0827 15:33:43.795522    4653 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:33:43.795533    4653 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:33:43.795593    4653 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/enable-default-cni-554000/config.json ...
	I0827 15:33:43.795605    4653 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/enable-default-cni-554000/config.json: {Name:mk3f13f126584d5227b8f0c9e3b90de473c9c4c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:33:43.795890    4653 start.go:360] acquireMachinesLock for enable-default-cni-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:33:43.795927    4653 start.go:364] duration metric: took 29.333µs to acquireMachinesLock for "enable-default-cni-554000"
	I0827 15:33:43.795940    4653 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:33:43.795969    4653 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:33:43.803380    4653 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:33:43.819169    4653 start.go:159] libmachine.API.Create for "enable-default-cni-554000" (driver="qemu2")
	I0827 15:33:43.819207    4653 client.go:168] LocalClient.Create starting
	I0827 15:33:43.819265    4653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:33:43.819304    4653 main.go:141] libmachine: Decoding PEM data...
	I0827 15:33:43.819313    4653 main.go:141] libmachine: Parsing certificate...
	I0827 15:33:43.819349    4653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:33:43.819372    4653 main.go:141] libmachine: Decoding PEM data...
	I0827 15:33:43.819378    4653 main.go:141] libmachine: Parsing certificate...
	I0827 15:33:43.819746    4653 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:33:43.974781    4653 main.go:141] libmachine: Creating SSH key...
	I0827 15:33:44.077270    4653 main.go:141] libmachine: Creating Disk image...
	I0827 15:33:44.077278    4653 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:33:44.077500    4653 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/disk.qcow2
	I0827 15:33:44.086768    4653 main.go:141] libmachine: STDOUT: 
	I0827 15:33:44.086788    4653 main.go:141] libmachine: STDERR: 
	I0827 15:33:44.086841    4653 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/disk.qcow2 +20000M
	I0827 15:33:44.094899    4653 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:33:44.094924    4653 main.go:141] libmachine: STDERR: 
	I0827 15:33:44.094940    4653 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/disk.qcow2
	I0827 15:33:44.094945    4653 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:33:44.094955    4653 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:33:44.094993    4653 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:b5:44:4f:93:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/disk.qcow2
	I0827 15:33:44.096696    4653 main.go:141] libmachine: STDOUT: 
	I0827 15:33:44.096713    4653 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:33:44.096731    4653 client.go:171] duration metric: took 277.52875ms to LocalClient.Create
	I0827 15:33:46.098869    4653 start.go:128] duration metric: took 2.302951375s to createHost
	I0827 15:33:46.098930    4653 start.go:83] releasing machines lock for "enable-default-cni-554000", held for 2.303069208s
	W0827 15:33:46.099039    4653 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:33:46.113296    4653 out.go:177] * Deleting "enable-default-cni-554000" in qemu2 ...
	W0827 15:33:46.141216    4653 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:33:46.141236    4653 start.go:729] Will try again in 5 seconds ...
	I0827 15:33:51.143447    4653 start.go:360] acquireMachinesLock for enable-default-cni-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:33:51.144062    4653 start.go:364] duration metric: took 500.917µs to acquireMachinesLock for "enable-default-cni-554000"
	I0827 15:33:51.144139    4653 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:33:51.144376    4653 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:33:51.155104    4653 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:33:51.203067    4653 start.go:159] libmachine.API.Create for "enable-default-cni-554000" (driver="qemu2")
	I0827 15:33:51.203143    4653 client.go:168] LocalClient.Create starting
	I0827 15:33:51.203280    4653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:33:51.203344    4653 main.go:141] libmachine: Decoding PEM data...
	I0827 15:33:51.203359    4653 main.go:141] libmachine: Parsing certificate...
	I0827 15:33:51.203421    4653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:33:51.203468    4653 main.go:141] libmachine: Decoding PEM data...
	I0827 15:33:51.203483    4653 main.go:141] libmachine: Parsing certificate...
	I0827 15:33:51.204143    4653 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:33:51.368555    4653 main.go:141] libmachine: Creating SSH key...
	I0827 15:33:51.419489    4653 main.go:141] libmachine: Creating Disk image...
	I0827 15:33:51.419498    4653 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:33:51.419711    4653 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/disk.qcow2
	I0827 15:33:51.429057    4653 main.go:141] libmachine: STDOUT: 
	I0827 15:33:51.429076    4653 main.go:141] libmachine: STDERR: 
	I0827 15:33:51.429125    4653 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/disk.qcow2 +20000M
	I0827 15:33:51.437278    4653 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:33:51.437296    4653 main.go:141] libmachine: STDERR: 
	I0827 15:33:51.437307    4653 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/disk.qcow2
	I0827 15:33:51.437311    4653 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:33:51.437323    4653 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:33:51.437362    4653 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:14:b3:15:67:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/enable-default-cni-554000/disk.qcow2
	I0827 15:33:51.439049    4653 main.go:141] libmachine: STDOUT: 
	I0827 15:33:51.439067    4653 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:33:51.439080    4653 client.go:171] duration metric: took 235.939875ms to LocalClient.Create
	I0827 15:33:53.441228    4653 start.go:128] duration metric: took 2.296850958s to createHost
	I0827 15:33:53.441306    4653 start.go:83] releasing machines lock for "enable-default-cni-554000", held for 2.297293667s
	W0827 15:33:53.441690    4653 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:33:53.457384    4653 out.go:201] 
	W0827 15:33:53.460417    4653 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:33:53.460478    4653 out.go:270] * 
	W0827 15:33:53.462962    4653 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:33:53.472413    4653 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.81s)
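Aside from the socket failure, note the E-level line in the stderr above: the harness still passes the deprecated --enable-default-cni flag, and minikube rewrites it to --cni=bridge before building the cluster config (hence CNI:bridge in the dump). The equivalent invocation without the deprecated flag, keeping every other flag from the Run: line, would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2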

TestNetworkPlugins/group/kindnet/Start (9.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.791463125s)

-- stdout --
	* [kindnet-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-554000" primary control-plane node in "kindnet-554000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-554000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:33:55.669766    4762 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:33:55.669896    4762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:33:55.669900    4762 out.go:358] Setting ErrFile to fd 2...
	I0827 15:33:55.669902    4762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:33:55.670054    4762 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:33:55.671206    4762 out.go:352] Setting JSON to false
	I0827 15:33:55.687608    4762 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3800,"bootTime":1724794235,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:33:55.687679    4762 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:33:55.694357    4762 out.go:177] * [kindnet-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:33:55.702136    4762 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:33:55.702222    4762 notify.go:220] Checking for updates...
	I0827 15:33:55.710080    4762 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:33:55.713112    4762 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:33:55.716136    4762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:33:55.719075    4762 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:33:55.722102    4762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:33:55.725416    4762 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:33:55.725484    4762 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:33:55.725534    4762 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:33:55.729128    4762 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:33:55.736094    4762 start.go:297] selected driver: qemu2
	I0827 15:33:55.736099    4762 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:33:55.736104    4762 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:33:55.738343    4762 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:33:55.742099    4762 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:33:55.745208    4762 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:33:55.745245    4762 cni.go:84] Creating CNI manager for "kindnet"
	I0827 15:33:55.745255    4762 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0827 15:33:55.745296    4762 start.go:340] cluster config:
	{Name:kindnet-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:33:55.749058    4762 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:33:55.758093    4762 out.go:177] * Starting "kindnet-554000" primary control-plane node in "kindnet-554000" cluster
	I0827 15:33:55.762099    4762 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:33:55.762120    4762 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:33:55.762131    4762 cache.go:56] Caching tarball of preloaded images
	I0827 15:33:55.762204    4762 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:33:55.762211    4762 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:33:55.762295    4762 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/kindnet-554000/config.json ...
	I0827 15:33:55.762307    4762 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/kindnet-554000/config.json: {Name:mk440d488528dc004b7cadbd21e4ee98ba2f01a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:33:55.762528    4762 start.go:360] acquireMachinesLock for kindnet-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:33:55.762563    4762 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "kindnet-554000"
	I0827 15:33:55.762573    4762 start.go:93] Provisioning new machine with config: &{Name:kindnet-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:33:55.762610    4762 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:33:55.777290    4762 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:33:55.795812    4762 start.go:159] libmachine.API.Create for "kindnet-554000" (driver="qemu2")
	I0827 15:33:55.795859    4762 client.go:168] LocalClient.Create starting
	I0827 15:33:55.795939    4762 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:33:55.795974    4762 main.go:141] libmachine: Decoding PEM data...
	I0827 15:33:55.795991    4762 main.go:141] libmachine: Parsing certificate...
	I0827 15:33:55.796036    4762 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:33:55.796059    4762 main.go:141] libmachine: Decoding PEM data...
	I0827 15:33:55.796070    4762 main.go:141] libmachine: Parsing certificate...
	I0827 15:33:55.796432    4762 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:33:55.950142    4762 main.go:141] libmachine: Creating SSH key...
	I0827 15:33:56.068803    4762 main.go:141] libmachine: Creating Disk image...
	I0827 15:33:56.068813    4762 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:33:56.069052    4762 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/disk.qcow2
	I0827 15:33:56.078427    4762 main.go:141] libmachine: STDOUT: 
	I0827 15:33:56.078444    4762 main.go:141] libmachine: STDERR: 
	I0827 15:33:56.078508    4762 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/disk.qcow2 +20000M
	I0827 15:33:56.086351    4762 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:33:56.086365    4762 main.go:141] libmachine: STDERR: 
	I0827 15:33:56.086378    4762 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/disk.qcow2
	I0827 15:33:56.086385    4762 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:33:56.086398    4762 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:33:56.086430    4762 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:42:15:d1:97:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/disk.qcow2
	I0827 15:33:56.088124    4762 main.go:141] libmachine: STDOUT: 
	I0827 15:33:56.088139    4762 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:33:56.088155    4762 client.go:171] duration metric: took 292.302041ms to LocalClient.Create
	I0827 15:33:58.089086    4762 start.go:128] duration metric: took 2.326547292s to createHost
	I0827 15:33:58.089098    4762 start.go:83] releasing machines lock for "kindnet-554000", held for 2.326607708s
	W0827 15:33:58.089131    4762 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:33:58.102501    4762 out.go:177] * Deleting "kindnet-554000" in qemu2 ...
	W0827 15:33:58.115966    4762 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:33:58.115973    4762 start.go:729] Will try again in 5 seconds ...
	I0827 15:34:03.116128    4762 start.go:360] acquireMachinesLock for kindnet-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:34:03.116468    4762 start.go:364] duration metric: took 289.084µs to acquireMachinesLock for "kindnet-554000"
	I0827 15:34:03.116553    4762 start.go:93] Provisioning new machine with config: &{Name:kindnet-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:34:03.116729    4762 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:34:03.127226    4762 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:34:03.167139    4762 start.go:159] libmachine.API.Create for "kindnet-554000" (driver="qemu2")
	I0827 15:34:03.167191    4762 client.go:168] LocalClient.Create starting
	I0827 15:34:03.167307    4762 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:34:03.167373    4762 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:03.167390    4762 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:03.167451    4762 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:34:03.167491    4762 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:03.167502    4762 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:03.168051    4762 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:34:03.330558    4762 main.go:141] libmachine: Creating SSH key...
	I0827 15:34:03.368273    4762 main.go:141] libmachine: Creating Disk image...
	I0827 15:34:03.368279    4762 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:34:03.368505    4762 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/disk.qcow2
	I0827 15:34:03.377976    4762 main.go:141] libmachine: STDOUT: 
	I0827 15:34:03.377992    4762 main.go:141] libmachine: STDERR: 
	I0827 15:34:03.378054    4762 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/disk.qcow2 +20000M
	I0827 15:34:03.386758    4762 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:34:03.386776    4762 main.go:141] libmachine: STDERR: 
	I0827 15:34:03.386804    4762 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/disk.qcow2
	I0827 15:34:03.386808    4762 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:34:03.386825    4762 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:34:03.386850    4762 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:b4:06:06:31:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kindnet-554000/disk.qcow2
	I0827 15:34:03.388675    4762 main.go:141] libmachine: STDOUT: 
	I0827 15:34:03.388691    4762 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:34:03.388705    4762 client.go:171] duration metric: took 221.51625ms to LocalClient.Create
	I0827 15:34:05.390846    4762 start.go:128] duration metric: took 2.274152167s to createHost
	I0827 15:34:05.390933    4762 start.go:83] releasing machines lock for "kindnet-554000", held for 2.274521291s
	W0827 15:34:05.391413    4762 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:05.399878    4762 out.go:201] 
	W0827 15:34:05.407116    4762 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:34:05.407143    4762 out.go:270] * 
	* 
	W0827 15:34:05.409771    4762 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:34:05.420025    4762 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.79s)
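
Every failure in this group reduces to the same host-side problem: nothing is listening on the unix socket /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a networking file descriptor and the VM never starts. A minimal, self-contained Go sketch for probing the daemon from the CI host might look like the following (the probe program is illustrative, not part of minikube; only the socket path is taken from the logs above):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path taken from the failing logs above; a refused dial here
		// reproduces the exact failure mode ("Connection refused") that
		// aborts each start attempt.
		const sock = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A "connection refused" from this probe would indicate the socket file exists but no socket_vmnet daemon is behind it, which matches the behavior captured in every network-plugin test below.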

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.815938666s)

                                                
                                                
-- stdout --
	* [bridge-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-554000" primary control-plane node in "bridge-554000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-554000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:34:07.748534    4880 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:34:07.748646    4880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:34:07.748650    4880 out.go:358] Setting ErrFile to fd 2...
	I0827 15:34:07.748652    4880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:34:07.748796    4880 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:34:07.749871    4880 out.go:352] Setting JSON to false
	I0827 15:34:07.766284    4880 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3812,"bootTime":1724794235,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:34:07.766361    4880 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:34:07.773297    4880 out.go:177] * [bridge-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:34:07.781225    4880 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:34:07.781259    4880 notify.go:220] Checking for updates...
	I0827 15:34:07.788296    4880 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:34:07.791287    4880 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:34:07.794267    4880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:34:07.797292    4880 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:34:07.800215    4880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:34:07.803513    4880 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:34:07.803580    4880 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:34:07.803631    4880 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:34:07.808215    4880 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:34:07.815260    4880 start.go:297] selected driver: qemu2
	I0827 15:34:07.815266    4880 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:34:07.815272    4880 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:34:07.817433    4880 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:34:07.821214    4880 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:34:07.824278    4880 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:34:07.824296    4880 cni.go:84] Creating CNI manager for "bridge"
	I0827 15:34:07.824305    4880 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 15:34:07.824331    4880 start.go:340] cluster config:
	{Name:bridge-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:34:07.827765    4880 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:34:07.835093    4880 out.go:177] * Starting "bridge-554000" primary control-plane node in "bridge-554000" cluster
	I0827 15:34:07.839247    4880 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:34:07.839263    4880 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:34:07.839272    4880 cache.go:56] Caching tarball of preloaded images
	I0827 15:34:07.839333    4880 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:34:07.839338    4880 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:34:07.839402    4880 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/bridge-554000/config.json ...
	I0827 15:34:07.839413    4880 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/bridge-554000/config.json: {Name:mk78234eb801d4a391d37ec9e55c3c2a52edac7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:34:07.839631    4880 start.go:360] acquireMachinesLock for bridge-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:34:07.839662    4880 start.go:364] duration metric: took 25.583µs to acquireMachinesLock for "bridge-554000"
	I0827 15:34:07.839673    4880 start.go:93] Provisioning new machine with config: &{Name:bridge-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:34:07.839700    4880 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:34:07.849200    4880 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:34:07.864609    4880 start.go:159] libmachine.API.Create for "bridge-554000" (driver="qemu2")
	I0827 15:34:07.864639    4880 client.go:168] LocalClient.Create starting
	I0827 15:34:07.864707    4880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:34:07.864739    4880 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:07.864752    4880 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:07.864795    4880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:34:07.864820    4880 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:07.864829    4880 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:07.865162    4880 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:34:08.018672    4880 main.go:141] libmachine: Creating SSH key...
	I0827 15:34:08.100140    4880 main.go:141] libmachine: Creating Disk image...
	I0827 15:34:08.100145    4880 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:34:08.100380    4880 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/disk.qcow2
	I0827 15:34:08.109947    4880 main.go:141] libmachine: STDOUT: 
	I0827 15:34:08.109965    4880 main.go:141] libmachine: STDERR: 
	I0827 15:34:08.110020    4880 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/disk.qcow2 +20000M
	I0827 15:34:08.118125    4880 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:34:08.118139    4880 main.go:141] libmachine: STDERR: 
	I0827 15:34:08.118152    4880 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/disk.qcow2
	I0827 15:34:08.118157    4880 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:34:08.118172    4880 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:34:08.118199    4880 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:b0:64:70:da:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/disk.qcow2
	I0827 15:34:08.119861    4880 main.go:141] libmachine: STDOUT: 
	I0827 15:34:08.119880    4880 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:34:08.119900    4880 client.go:171] duration metric: took 255.263292ms to LocalClient.Create
	I0827 15:34:10.122156    4880 start.go:128] duration metric: took 2.282493792s to createHost
	I0827 15:34:10.122273    4880 start.go:83] releasing machines lock for "bridge-554000", held for 2.282676167s
	W0827 15:34:10.122356    4880 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:10.132359    4880 out.go:177] * Deleting "bridge-554000" in qemu2 ...
	W0827 15:34:10.161158    4880 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:10.161194    4880 start.go:729] Will try again in 5 seconds ...
	I0827 15:34:15.163261    4880 start.go:360] acquireMachinesLock for bridge-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:34:15.163649    4880 start.go:364] duration metric: took 308.459µs to acquireMachinesLock for "bridge-554000"
	I0827 15:34:15.163776    4880 start.go:93] Provisioning new machine with config: &{Name:bridge-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:34:15.164023    4880 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:34:15.174668    4880 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:34:15.213340    4880 start.go:159] libmachine.API.Create for "bridge-554000" (driver="qemu2")
	I0827 15:34:15.213390    4880 client.go:168] LocalClient.Create starting
	I0827 15:34:15.213495    4880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:34:15.213564    4880 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:15.213580    4880 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:15.213633    4880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:34:15.213672    4880 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:15.213683    4880 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:15.214173    4880 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:34:15.377504    4880 main.go:141] libmachine: Creating SSH key...
	I0827 15:34:15.469171    4880 main.go:141] libmachine: Creating Disk image...
	I0827 15:34:15.469177    4880 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:34:15.469406    4880 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/disk.qcow2
	I0827 15:34:15.479416    4880 main.go:141] libmachine: STDOUT: 
	I0827 15:34:15.479449    4880 main.go:141] libmachine: STDERR: 
	I0827 15:34:15.479536    4880 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/disk.qcow2 +20000M
	I0827 15:34:15.488158    4880 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:34:15.488183    4880 main.go:141] libmachine: STDERR: 
	I0827 15:34:15.488199    4880 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/disk.qcow2
	I0827 15:34:15.488204    4880 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:34:15.488216    4880 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:34:15.488248    4880 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:90:82:48:ce:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/bridge-554000/disk.qcow2
	I0827 15:34:15.489998    4880 main.go:141] libmachine: STDOUT: 
	I0827 15:34:15.490019    4880 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:34:15.490032    4880 client.go:171] duration metric: took 276.645208ms to LocalClient.Create
	I0827 15:34:17.492141    4880 start.go:128] duration metric: took 2.32814275s to createHost
	I0827 15:34:17.492208    4880 start.go:83] releasing machines lock for "bridge-554000", held for 2.328619792s
	W0827 15:34:17.492469    4880 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:17.503208    4880 out.go:201] 
	W0827 15:34:17.507125    4880 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:34:17.507177    4880 out.go:270] * 
	* 
	W0827 15:34:17.509004    4880 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:34:17.524026    4880 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.82s)
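
As the logs above show, each start goes through one delete-and-retry cycle before giving up: createHost fails, the half-built machine is deleted, minikube waits five seconds and retries, and the second failure is promoted to GUEST_PROVISION (exit status 80). A stripped-down Go sketch of that control flow (createHost here is a stand-in for the qemu2 driver call, not the real implementation; the error string is copied from the captured run):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the driver call that fails above; it always
	// returns the captured socket_vmnet error so the sketch mirrors this run.
	func createHost(name string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const name = "bridge-554000"
		if err := createHost(name); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			// The real code deletes the half-created machine before retrying.
			time.Sleep(5 * time.Second)
			if err := createHost(name); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			}
		}
	}

Because the refused socket is a host-level condition, the retry cannot succeed, which is why every test in this group fails in roughly ten seconds: two ~2.3s createHost attempts plus the five-second back-off.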

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
E0827 15:34:24.499827    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.845186833s)
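
Unlike the --cni runs above, this test passes --network-plugin=kubenet, so minikube creates no CNI manager at all (the captured log below reports: network plugin configured as "kubenet", returning disabled). The branching amounts to something like this sketch (function and names are illustrative, not minikube's actual code):

	package main

	import "fmt"

	// cniChoice sketches the decision visible in these logs: a --cni value
	// gets a CNI manager, while --network-plugin=kubenet disables CNI.
	func cniChoice(flag string) string {
		if flag == "kubenet" {
			return `network plugin configured as "kubenet", returning disabled`
		}
		return fmt.Sprintf("Creating CNI manager for %q", flag)
	}

	func main() {
		fmt.Println(cniChoice("kindnet"))
		fmt.Println(cniChoice("kubenet"))
	}

Either way the run never reaches the network-plugin logic, since VM creation fails at the same socket_vmnet step as before.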

                                                
                                                
-- stdout --
	* [kubenet-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-554000" primary control-plane node in "kubenet-554000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-554000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 15:34:19.719163    4992 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:34:19.719297    4992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:34:19.719301    4992 out.go:358] Setting ErrFile to fd 2...
	I0827 15:34:19.719303    4992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:34:19.719448    4992 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:34:19.720585    4992 out.go:352] Setting JSON to false
	I0827 15:34:19.737249    4992 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3824,"bootTime":1724794235,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:34:19.737316    4992 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:34:19.743674    4992 out.go:177] * [kubenet-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:34:19.751595    4992 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:34:19.751644    4992 notify.go:220] Checking for updates...
	I0827 15:34:19.758559    4992 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:34:19.761606    4992 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:34:19.765544    4992 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:34:19.768576    4992 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:34:19.771651    4992 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:34:19.774886    4992 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:34:19.774954    4992 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:34:19.775002    4992 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:34:19.779568    4992 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:34:19.786533    4992 start.go:297] selected driver: qemu2
	I0827 15:34:19.786540    4992 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:34:19.786546    4992 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:34:19.788865    4992 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:34:19.792569    4992 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:34:19.795647    4992 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:34:19.795678    4992 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0827 15:34:19.795718    4992 start.go:340] cluster config:
	{Name:kubenet-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:34:19.799423    4992 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:34:19.807514    4992 out.go:177] * Starting "kubenet-554000" primary control-plane node in "kubenet-554000" cluster
	I0827 15:34:19.811509    4992 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:34:19.811524    4992 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:34:19.811531    4992 cache.go:56] Caching tarball of preloaded images
	I0827 15:34:19.811593    4992 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:34:19.811599    4992 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:34:19.811655    4992 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/kubenet-554000/config.json ...
	I0827 15:34:19.811666    4992 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/kubenet-554000/config.json: {Name:mk18d09b5de2089f7f2adeb0bccaf48c4bbdca40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:34:19.812009    4992 start.go:360] acquireMachinesLock for kubenet-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:34:19.812047    4992 start.go:364] duration metric: took 32.042µs to acquireMachinesLock for "kubenet-554000"
	I0827 15:34:19.812059    4992 start.go:93] Provisioning new machine with config: &{Name:kubenet-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:34:19.812095    4992 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:34:19.819510    4992 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:34:19.834733    4992 start.go:159] libmachine.API.Create for "kubenet-554000" (driver="qemu2")
	I0827 15:34:19.834758    4992 client.go:168] LocalClient.Create starting
	I0827 15:34:19.834824    4992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:34:19.834854    4992 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:19.834861    4992 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:19.834901    4992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:34:19.834924    4992 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:19.834931    4992 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:19.835260    4992 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:34:19.990200    4992 main.go:141] libmachine: Creating SSH key...
	I0827 15:34:20.148172    4992 main.go:141] libmachine: Creating Disk image...
	I0827 15:34:20.148185    4992 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:34:20.148420    4992 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/disk.qcow2
	I0827 15:34:20.158228    4992 main.go:141] libmachine: STDOUT: 
	I0827 15:34:20.158251    4992 main.go:141] libmachine: STDERR: 
	I0827 15:34:20.158303    4992 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/disk.qcow2 +20000M
	I0827 15:34:20.166627    4992 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:34:20.166649    4992 main.go:141] libmachine: STDERR: 
	I0827 15:34:20.166672    4992 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/disk.qcow2
	I0827 15:34:20.166677    4992 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:34:20.166688    4992 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:34:20.166718    4992 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:93:a7:f0:de:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/disk.qcow2
	I0827 15:34:20.168443    4992 main.go:141] libmachine: STDOUT: 
	I0827 15:34:20.168460    4992 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:34:20.168481    4992 client.go:171] duration metric: took 333.72975ms to LocalClient.Create
	I0827 15:34:22.170614    4992 start.go:128] duration metric: took 2.358566084s to createHost
	I0827 15:34:22.170678    4992 start.go:83] releasing machines lock for "kubenet-554000", held for 2.358701834s
	W0827 15:34:22.170788    4992 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:22.183349    4992 out.go:177] * Deleting "kubenet-554000" in qemu2 ...
	W0827 15:34:22.206156    4992 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:22.206176    4992 start.go:729] Will try again in 5 seconds ...
	I0827 15:34:27.208140    4992 start.go:360] acquireMachinesLock for kubenet-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:34:27.208426    4992 start.go:364] duration metric: took 223.583µs to acquireMachinesLock for "kubenet-554000"
	I0827 15:34:27.208503    4992 start.go:93] Provisioning new machine with config: &{Name:kubenet-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:34:27.208651    4992 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:34:27.217856    4992 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:34:27.247216    4992 start.go:159] libmachine.API.Create for "kubenet-554000" (driver="qemu2")
	I0827 15:34:27.247260    4992 client.go:168] LocalClient.Create starting
	I0827 15:34:27.247358    4992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:34:27.247415    4992 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:27.247426    4992 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:27.247490    4992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:34:27.247526    4992 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:27.247539    4992 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:27.248051    4992 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:34:27.407843    4992 main.go:141] libmachine: Creating SSH key...
	I0827 15:34:27.483985    4992 main.go:141] libmachine: Creating Disk image...
	I0827 15:34:27.483992    4992 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:34:27.484239    4992 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/disk.qcow2
	I0827 15:34:27.493925    4992 main.go:141] libmachine: STDOUT: 
	I0827 15:34:27.493955    4992 main.go:141] libmachine: STDERR: 
	I0827 15:34:27.494006    4992 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/disk.qcow2 +20000M
	I0827 15:34:27.502237    4992 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:34:27.502253    4992 main.go:141] libmachine: STDERR: 
	I0827 15:34:27.502273    4992 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/disk.qcow2
	I0827 15:34:27.502278    4992 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:34:27.502289    4992 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:34:27.502314    4992 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:56:d0:3b:bd:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/kubenet-554000/disk.qcow2
	I0827 15:34:27.503987    4992 main.go:141] libmachine: STDOUT: 
	I0827 15:34:27.504003    4992 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:34:27.504014    4992 client.go:171] duration metric: took 256.759208ms to LocalClient.Create
	I0827 15:34:29.505909    4992 start.go:128] duration metric: took 2.297324584s to createHost
	I0827 15:34:29.505922    4992 start.go:83] releasing machines lock for "kubenet-554000", held for 2.297562083s
	W0827 15:34:29.506021    4992 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:29.514232    4992 out.go:201] 
	W0827 15:34:29.518245    4992 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:34:29.518256    4992 out.go:270] * 
	* 
	W0827 15:34:29.518716    4992 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:34:29.527199    4992 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.85s)
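
Every start attempt in this group fails at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and minikube aborts with GUEST_PROVISION (exit status 80). A minimal Go sketch of that connectivity check, useful for triaging the agent before rerunning the suite; the probe function is illustrative and not part of minikube:

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSocketVMnet dials the socket_vmnet unix socket the same way
// socket_vmnet_client would. A "connection refused" here means the
// socket_vmnet daemon is not running (or not listening on this path),
// which is the error seen throughout this run.
func probeSocketVMnet(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return fmt.Errorf("socket_vmnet unreachable at %s: %w", path, err)
	}
	return conn.Close()
}

func main() {
	if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
		fmt.Println("FAIL:", err) // matches the log: Connection refused
		return
	}
	fmt.Println("OK: socket_vmnet is accepting connections")
}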

TestNetworkPlugins/group/custom-flannel/Start (10.08s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (10.076361708s)

-- stdout --
	* [custom-flannel-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-554000" primary control-plane node in "custom-flannel-554000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-554000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:34:31.697707    5101 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:34:31.697838    5101 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:34:31.697844    5101 out.go:358] Setting ErrFile to fd 2...
	I0827 15:34:31.697847    5101 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:34:31.697965    5101 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:34:31.699119    5101 out.go:352] Setting JSON to false
	I0827 15:34:31.715761    5101 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3836,"bootTime":1724794235,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:34:31.715833    5101 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:34:31.722311    5101 out.go:177] * [custom-flannel-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:34:31.730223    5101 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:34:31.730288    5101 notify.go:220] Checking for updates...
	I0827 15:34:31.734097    5101 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:34:31.737090    5101 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:34:31.740119    5101 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:34:31.743178    5101 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:34:31.746087    5101 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:34:31.749424    5101 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:34:31.749502    5101 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:34:31.749561    5101 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:34:31.752120    5101 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:34:31.759169    5101 start.go:297] selected driver: qemu2
	I0827 15:34:31.759177    5101 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:34:31.759190    5101 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:34:31.761427    5101 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:34:31.766068    5101 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:34:31.769187    5101 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:34:31.769207    5101 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0827 15:34:31.769228    5101 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0827 15:34:31.769253    5101 start.go:340] cluster config:
	{Name:custom-flannel-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:34:31.772732    5101 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:34:31.781104    5101 out.go:177] * Starting "custom-flannel-554000" primary control-plane node in "custom-flannel-554000" cluster
	I0827 15:34:31.785126    5101 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:34:31.785142    5101 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:34:31.785157    5101 cache.go:56] Caching tarball of preloaded images
	I0827 15:34:31.785216    5101 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:34:31.785223    5101 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:34:31.785290    5101 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/custom-flannel-554000/config.json ...
	I0827 15:34:31.785307    5101 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/custom-flannel-554000/config.json: {Name:mk73b558713b26e488196f275000e61ec9f82667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:34:31.785650    5101 start.go:360] acquireMachinesLock for custom-flannel-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:34:31.785684    5101 start.go:364] duration metric: took 26.834µs to acquireMachinesLock for "custom-flannel-554000"
	I0827 15:34:31.785695    5101 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:34:31.785723    5101 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:34:31.794148    5101 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:34:31.811114    5101 start.go:159] libmachine.API.Create for "custom-flannel-554000" (driver="qemu2")
	I0827 15:34:31.811138    5101 client.go:168] LocalClient.Create starting
	I0827 15:34:31.811204    5101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:34:31.811234    5101 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:31.811243    5101 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:31.811277    5101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:34:31.811299    5101 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:31.811305    5101 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:31.811766    5101 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:34:31.966816    5101 main.go:141] libmachine: Creating SSH key...
	I0827 15:34:32.180909    5101 main.go:141] libmachine: Creating Disk image...
	I0827 15:34:32.180925    5101 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:34:32.181184    5101 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/disk.qcow2
	I0827 15:34:32.190778    5101 main.go:141] libmachine: STDOUT: 
	I0827 15:34:32.190795    5101 main.go:141] libmachine: STDERR: 
	I0827 15:34:32.190845    5101 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/disk.qcow2 +20000M
	I0827 15:34:32.198975    5101 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:34:32.198991    5101 main.go:141] libmachine: STDERR: 
	I0827 15:34:32.199010    5101 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/disk.qcow2
	I0827 15:34:32.199015    5101 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:34:32.199028    5101 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:34:32.199053    5101 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2e:8b:36:21:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/disk.qcow2
	I0827 15:34:32.200756    5101 main.go:141] libmachine: STDOUT: 
	I0827 15:34:32.200771    5101 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:34:32.200790    5101 client.go:171] duration metric: took 389.65675ms to LocalClient.Create
	I0827 15:34:34.202934    5101 start.go:128] duration metric: took 2.417267333s to createHost
	I0827 15:34:34.202999    5101 start.go:83] releasing machines lock for "custom-flannel-554000", held for 2.417387375s
	W0827 15:34:34.203056    5101 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:34.215524    5101 out.go:177] * Deleting "custom-flannel-554000" in qemu2 ...
	W0827 15:34:34.241475    5101 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:34.241509    5101 start.go:729] Will try again in 5 seconds ...
	I0827 15:34:39.243479    5101 start.go:360] acquireMachinesLock for custom-flannel-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:34:39.243721    5101 start.go:364] duration metric: took 199µs to acquireMachinesLock for "custom-flannel-554000"
	I0827 15:34:39.243780    5101 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:34:39.243903    5101 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:34:39.255145    5101 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:34:39.284590    5101 start.go:159] libmachine.API.Create for "custom-flannel-554000" (driver="qemu2")
	I0827 15:34:39.284632    5101 client.go:168] LocalClient.Create starting
	I0827 15:34:39.284732    5101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:34:39.284782    5101 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:39.284792    5101 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:39.284834    5101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:34:39.284869    5101 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:39.284877    5101 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:39.285298    5101 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:34:39.456086    5101 main.go:141] libmachine: Creating SSH key...
	I0827 15:34:39.682322    5101 main.go:141] libmachine: Creating Disk image...
	I0827 15:34:39.682333    5101 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:34:39.682608    5101 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/disk.qcow2
	I0827 15:34:39.692471    5101 main.go:141] libmachine: STDOUT: 
	I0827 15:34:39.692496    5101 main.go:141] libmachine: STDERR: 
	I0827 15:34:39.692551    5101 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/disk.qcow2 +20000M
	I0827 15:34:39.700655    5101 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:34:39.700671    5101 main.go:141] libmachine: STDERR: 
	I0827 15:34:39.700690    5101 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/disk.qcow2
	I0827 15:34:39.700697    5101 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:34:39.700708    5101 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:34:39.700739    5101 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:ba:0f:d1:82:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/custom-flannel-554000/disk.qcow2
	I0827 15:34:39.702428    5101 main.go:141] libmachine: STDOUT: 
	I0827 15:34:39.702442    5101 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:34:39.702457    5101 client.go:171] duration metric: took 417.833708ms to LocalClient.Create
	I0827 15:34:41.703673    5101 start.go:128] duration metric: took 2.459819209s to createHost
	I0827 15:34:41.703728    5101 start.go:83] releasing machines lock for "custom-flannel-554000", held for 2.460073291s
	W0827 15:34:41.704052    5101 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:41.715553    5101 out.go:201] 
	W0827 15:34:41.719626    5101 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:34:41.719652    5101 out.go:270] * 
	* 
	W0827 15:34:41.722169    5101 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:34:41.731467    5101 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (10.08s)
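
The log above also shows the driver's recovery path: a failed create is followed by `* Deleting "<profile>" in qemu2 ...`, the message "Will try again in 5 seconds ...", exactly one more attempt, and only then the hard GUEST_PROVISION exit. A simplified Go sketch of that control flow as it appears in this log (function names are ours; minikube's real implementation lives in start.go and client.go):

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for libmachine.API.Create; in this run it always
// fails because the socket_vmnet daemon is not listening. Illustrative only.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

// deleteHost stands in for the `* Deleting "<profile>" in qemu2 ...` cleanup
// of the half-created VM.
func deleteHost(profile string) {}

// startWithRetry mirrors the logged flow: one attempt, cleanup, a 5-second
// wait, a second attempt, then a hard provisioning error.
func startWithRetry(profile string) error {
	err := createHost(profile)
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	deleteHost(profile)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := createHost(profile); err != nil {
		return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
	}
	return nil
}

func main() {
	if err := startWithRetry("custom-flannel-554000"); err != nil {
		fmt.Println("X Exiting due to", err) // minikube maps this to exit status 80
	}
}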

TestNetworkPlugins/group/calico/Start (9.77s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.769127417s)

-- stdout --
	* [calico-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-554000" primary control-plane node in "calico-554000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-554000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:34:44.136443    5218 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:34:44.136587    5218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:34:44.136590    5218 out.go:358] Setting ErrFile to fd 2...
	I0827 15:34:44.136593    5218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:34:44.136728    5218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:34:44.137792    5218 out.go:352] Setting JSON to false
	I0827 15:34:44.154097    5218 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3849,"bootTime":1724794235,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:34:44.154167    5218 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:34:44.160417    5218 out.go:177] * [calico-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:34:44.168313    5218 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:34:44.168387    5218 notify.go:220] Checking for updates...
	I0827 15:34:44.175243    5218 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:34:44.178286    5218 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:34:44.186250    5218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:34:44.189298    5218 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:34:44.192299    5218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:34:44.195610    5218 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:34:44.195675    5218 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:34:44.195727    5218 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:34:44.199326    5218 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:34:44.206298    5218 start.go:297] selected driver: qemu2
	I0827 15:34:44.206304    5218 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:34:44.206316    5218 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:34:44.208453    5218 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:34:44.211263    5218 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:34:44.214372    5218 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:34:44.214389    5218 cni.go:84] Creating CNI manager for "calico"
	I0827 15:34:44.214393    5218 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0827 15:34:44.214420    5218 start.go:340] cluster config:
	{Name:calico-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:34:44.217956    5218 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:34:44.226283    5218 out.go:177] * Starting "calico-554000" primary control-plane node in "calico-554000" cluster
	I0827 15:34:44.230238    5218 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:34:44.230254    5218 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:34:44.230259    5218 cache.go:56] Caching tarball of preloaded images
	I0827 15:34:44.230310    5218 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:34:44.230315    5218 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:34:44.230364    5218 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/calico-554000/config.json ...
	I0827 15:34:44.230374    5218 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/calico-554000/config.json: {Name:mka3286cadee2dc524f3c27485dfba150c624307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:34:44.230722    5218 start.go:360] acquireMachinesLock for calico-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:34:44.230752    5218 start.go:364] duration metric: took 25.833µs to acquireMachinesLock for "calico-554000"
	I0827 15:34:44.230763    5218 start.go:93] Provisioning new machine with config: &{Name:calico-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:34:44.230791    5218 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:34:44.235281    5218 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:34:44.250472    5218 start.go:159] libmachine.API.Create for "calico-554000" (driver="qemu2")
	I0827 15:34:44.250500    5218 client.go:168] LocalClient.Create starting
	I0827 15:34:44.250562    5218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:34:44.250604    5218 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:44.250614    5218 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:44.250651    5218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:34:44.250674    5218 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:44.250682    5218 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:44.251096    5218 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:34:44.407859    5218 main.go:141] libmachine: Creating SSH key...
	I0827 15:34:44.491901    5218 main.go:141] libmachine: Creating Disk image...
	I0827 15:34:44.491910    5218 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:34:44.492151    5218 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/disk.qcow2
	I0827 15:34:44.501351    5218 main.go:141] libmachine: STDOUT: 
	I0827 15:34:44.501371    5218 main.go:141] libmachine: STDERR: 
	I0827 15:34:44.501420    5218 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/disk.qcow2 +20000M
	I0827 15:34:44.509563    5218 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:34:44.509577    5218 main.go:141] libmachine: STDERR: 
	I0827 15:34:44.509589    5218 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/disk.qcow2
	I0827 15:34:44.509597    5218 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:34:44.509614    5218 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:34:44.509637    5218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:44:e9:a7:9c:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/disk.qcow2
	I0827 15:34:44.511269    5218 main.go:141] libmachine: STDOUT: 
	I0827 15:34:44.511290    5218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:34:44.511309    5218 client.go:171] duration metric: took 260.812458ms to LocalClient.Create
	I0827 15:34:46.513460    5218 start.go:128] duration metric: took 2.282714666s to createHost
	I0827 15:34:46.513518    5218 start.go:83] releasing machines lock for "calico-554000", held for 2.282832375s
	W0827 15:34:46.513636    5218 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:46.520989    5218 out.go:177] * Deleting "calico-554000" in qemu2 ...
	W0827 15:34:46.555688    5218 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:46.555719    5218 start.go:729] Will try again in 5 seconds ...
	I0827 15:34:51.557664    5218 start.go:360] acquireMachinesLock for calico-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:34:51.558016    5218 start.go:364] duration metric: took 271.583µs to acquireMachinesLock for "calico-554000"
	I0827 15:34:51.558102    5218 start.go:93] Provisioning new machine with config: &{Name:calico-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:34:51.558211    5218 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:34:51.568618    5218 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:34:51.604022    5218 start.go:159] libmachine.API.Create for "calico-554000" (driver="qemu2")
	I0827 15:34:51.604066    5218 client.go:168] LocalClient.Create starting
	I0827 15:34:51.604173    5218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:34:51.604240    5218 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:51.604255    5218 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:51.604311    5218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:34:51.604351    5218 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:51.604368    5218 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:51.605000    5218 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:34:51.768391    5218 main.go:141] libmachine: Creating SSH key...
	I0827 15:34:51.817350    5218 main.go:141] libmachine: Creating Disk image...
	I0827 15:34:51.817356    5218 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:34:51.817582    5218 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/disk.qcow2
	I0827 15:34:51.827121    5218 main.go:141] libmachine: STDOUT: 
	I0827 15:34:51.827138    5218 main.go:141] libmachine: STDERR: 
	I0827 15:34:51.827190    5218 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/disk.qcow2 +20000M
	I0827 15:34:51.835332    5218 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:34:51.835348    5218 main.go:141] libmachine: STDERR: 
	I0827 15:34:51.835366    5218 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/disk.qcow2
	I0827 15:34:51.835371    5218 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:34:51.835381    5218 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:34:51.835415    5218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:93:5d:20:72:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/calico-554000/disk.qcow2
	I0827 15:34:51.837084    5218 main.go:141] libmachine: STDOUT: 
	I0827 15:34:51.837100    5218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:34:51.837113    5218 client.go:171] duration metric: took 233.048667ms to LocalClient.Create
	I0827 15:34:53.839150    5218 start.go:128] duration metric: took 2.280998125s to createHost
	I0827 15:34:53.839178    5218 start.go:83] releasing machines lock for "calico-554000", held for 2.281224292s
	W0827 15:34:53.839363    5218 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:53.851861    5218 out.go:201] 
	W0827 15:34:53.855850    5218 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:34:53.855873    5218 out.go:270] * 
	* 
	W0827 15:34:53.856905    5218 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:34:53.866814    5218 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.77s)
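
Every qemu2 start in this report fails at the same point: socket_vmnet_client cannot connect to the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and host creation aborts with GUEST_PROVISION (exit status 80). A minimal triage sketch for the affected build host, assuming the install prefix that appears in these logs; the gateway address below is illustrative, not taken from this report:

	# Check whether anything is serving the unix socket the driver expects.
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If the daemon is not running, start it by hand (needs root; the gateway
	# value is an example from the socket_vmnet docs, not from this run).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet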

TestNetworkPlugins/group/false/Start (9.87s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-554000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.866939916s)

-- stdout --
	* [false-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-554000" primary control-plane node in "false-554000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-554000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:34:56.301603    5339 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:34:56.301754    5339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:34:56.301759    5339 out.go:358] Setting ErrFile to fd 2...
	I0827 15:34:56.301761    5339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:34:56.301892    5339 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:34:56.303102    5339 out.go:352] Setting JSON to false
	I0827 15:34:56.320582    5339 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3861,"bootTime":1724794235,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:34:56.320661    5339 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:34:56.325217    5339 out.go:177] * [false-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:34:56.333055    5339 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:34:56.333094    5339 notify.go:220] Checking for updates...
	I0827 15:34:56.342065    5339 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:34:56.345096    5339 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:34:56.348056    5339 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:34:56.351077    5339 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:34:56.354017    5339 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:34:56.357430    5339 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:34:56.357499    5339 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:34:56.357546    5339 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:34:56.361088    5339 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:34:56.368052    5339 start.go:297] selected driver: qemu2
	I0827 15:34:56.368057    5339 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:34:56.368065    5339 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:34:56.370233    5339 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:34:56.374034    5339 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:34:56.377122    5339 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:34:56.377158    5339 cni.go:84] Creating CNI manager for "false"
	I0827 15:34:56.377181    5339 start.go:340] cluster config:
	{Name:false-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:34:56.380863    5339 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:34:56.388003    5339 out.go:177] * Starting "false-554000" primary control-plane node in "false-554000" cluster
	I0827 15:34:56.392014    5339 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:34:56.392032    5339 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:34:56.392043    5339 cache.go:56] Caching tarball of preloaded images
	I0827 15:34:56.392103    5339 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:34:56.392108    5339 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:34:56.392181    5339 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/false-554000/config.json ...
	I0827 15:34:56.392200    5339 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/false-554000/config.json: {Name:mkcc716f142bab645c577ac3b3af72e2705b1259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:34:56.392535    5339 start.go:360] acquireMachinesLock for false-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:34:56.392568    5339 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "false-554000"
	I0827 15:34:56.392578    5339 start.go:93] Provisioning new machine with config: &{Name:false-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:34:56.392609    5339 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:34:56.396968    5339 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:34:56.413392    5339 start.go:159] libmachine.API.Create for "false-554000" (driver="qemu2")
	I0827 15:34:56.413417    5339 client.go:168] LocalClient.Create starting
	I0827 15:34:56.413472    5339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:34:56.413512    5339 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:56.413524    5339 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:56.413558    5339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:34:56.413580    5339 main.go:141] libmachine: Decoding PEM data...
	I0827 15:34:56.413589    5339 main.go:141] libmachine: Parsing certificate...
	I0827 15:34:56.414074    5339 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:34:56.571461    5339 main.go:141] libmachine: Creating SSH key...
	I0827 15:34:56.705888    5339 main.go:141] libmachine: Creating Disk image...
	I0827 15:34:56.705895    5339 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:34:56.706121    5339 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/disk.qcow2
	I0827 15:34:56.715659    5339 main.go:141] libmachine: STDOUT: 
	I0827 15:34:56.715680    5339 main.go:141] libmachine: STDERR: 
	I0827 15:34:56.715738    5339 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/disk.qcow2 +20000M
	I0827 15:34:56.723939    5339 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:34:56.723956    5339 main.go:141] libmachine: STDERR: 
	I0827 15:34:56.723969    5339 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/disk.qcow2
	I0827 15:34:56.723974    5339 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:34:56.724005    5339 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:34:56.724030    5339 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:f5:58:aa:3a:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/disk.qcow2
	I0827 15:34:56.725753    5339 main.go:141] libmachine: STDOUT: 
	I0827 15:34:56.725769    5339 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:34:56.725785    5339 client.go:171] duration metric: took 312.374458ms to LocalClient.Create
	I0827 15:34:58.728034    5339 start.go:128] duration metric: took 2.335470458s to createHost
	I0827 15:34:58.728126    5339 start.go:83] releasing machines lock for "false-554000", held for 2.335624917s
	W0827 15:34:58.728203    5339 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:58.746611    5339 out.go:177] * Deleting "false-554000" in qemu2 ...
	W0827 15:34:58.772628    5339 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:34:58.772653    5339 start.go:729] Will try again in 5 seconds ...
	I0827 15:35:03.774761    5339 start.go:360] acquireMachinesLock for false-554000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:03.775478    5339 start.go:364] duration metric: took 572.833µs to acquireMachinesLock for "false-554000"
	I0827 15:35:03.775634    5339 start.go:93] Provisioning new machine with config: &{Name:false-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:35:03.776037    5339 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:35:03.785728    5339 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0827 15:35:03.839109    5339 start.go:159] libmachine.API.Create for "false-554000" (driver="qemu2")
	I0827 15:35:03.839187    5339 client.go:168] LocalClient.Create starting
	I0827 15:35:03.839347    5339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:35:03.839426    5339 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:03.839444    5339 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:03.839511    5339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:35:03.839557    5339 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:03.839570    5339 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:03.840116    5339 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:35:04.009982    5339 main.go:141] libmachine: Creating SSH key...
	I0827 15:35:04.074774    5339 main.go:141] libmachine: Creating Disk image...
	I0827 15:35:04.074780    5339 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:35:04.075039    5339 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/disk.qcow2
	I0827 15:35:04.084396    5339 main.go:141] libmachine: STDOUT: 
	I0827 15:35:04.084417    5339 main.go:141] libmachine: STDERR: 
	I0827 15:35:04.084461    5339 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/disk.qcow2 +20000M
	I0827 15:35:04.092473    5339 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:35:04.092490    5339 main.go:141] libmachine: STDERR: 
	I0827 15:35:04.092500    5339 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/disk.qcow2
	I0827 15:35:04.092512    5339 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:35:04.092522    5339 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:04.092557    5339 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:d4:9f:f1:4c:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/false-554000/disk.qcow2
	I0827 15:35:04.094288    5339 main.go:141] libmachine: STDOUT: 
	I0827 15:35:04.094309    5339 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:04.094325    5339 client.go:171] duration metric: took 255.129666ms to LocalClient.Create
	I0827 15:35:06.096467    5339 start.go:128] duration metric: took 2.320434s to createHost
	I0827 15:35:06.096567    5339 start.go:83] releasing machines lock for "false-554000", held for 2.321138333s
	W0827 15:35:06.096864    5339 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:06.105599    5339 out.go:201] 
	W0827 15:35:06.112752    5339 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:35:06.112790    5339 out.go:270] * 
	* 
	W0827 15:35:06.115553    5339 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:35:06.125575    5339 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.87s)

TestStartStop/group/old-k8s-version/serial/FirstStart (10.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-615000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-615000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.258554333s)

-- stdout --
	* [old-k8s-version-615000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-615000" primary control-plane node in "old-k8s-version-615000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-615000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:35:08.337439    5452 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:35:08.338020    5452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:08.338214    5452 out.go:358] Setting ErrFile to fd 2...
	I0827 15:35:08.338234    5452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:08.338455    5452 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:35:08.339956    5452 out.go:352] Setting JSON to false
	I0827 15:35:08.357443    5452 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3873,"bootTime":1724794235,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:35:08.357510    5452 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:35:08.364727    5452 out.go:177] * [old-k8s-version-615000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:35:08.372595    5452 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:35:08.372618    5452 notify.go:220] Checking for updates...
	I0827 15:35:08.380530    5452 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:35:08.390569    5452 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:35:08.398553    5452 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:35:08.402600    5452 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:35:08.406572    5452 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:35:08.410891    5452 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:35:08.410959    5452 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:35:08.411008    5452 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:35:08.412693    5452 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:35:08.419596    5452 start.go:297] selected driver: qemu2
	I0827 15:35:08.419601    5452 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:35:08.419607    5452 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:35:08.422090    5452 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:35:08.424571    5452 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:35:08.428618    5452 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:35:08.428660    5452 cni.go:84] Creating CNI manager for ""
	I0827 15:35:08.428671    5452 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0827 15:35:08.428697    5452 start.go:340] cluster config:
	{Name:old-k8s-version-615000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-615000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:35:08.432356    5452 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:08.440523    5452 out.go:177] * Starting "old-k8s-version-615000" primary control-plane node in "old-k8s-version-615000" cluster
	I0827 15:35:08.444559    5452 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0827 15:35:08.444572    5452 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0827 15:35:08.444578    5452 cache.go:56] Caching tarball of preloaded images
	I0827 15:35:08.444633    5452 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:35:08.444638    5452 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0827 15:35:08.444693    5452 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/old-k8s-version-615000/config.json ...
	I0827 15:35:08.444704    5452 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/old-k8s-version-615000/config.json: {Name:mk86d8a31648e244a69f03e8d26e7dd036b53bb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:35:08.445064    5452 start.go:360] acquireMachinesLock for old-k8s-version-615000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:08.445096    5452 start.go:364] duration metric: took 25.708µs to acquireMachinesLock for "old-k8s-version-615000"
	I0827 15:35:08.445107    5452 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-615000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-615000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:35:08.445136    5452 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:35:08.453554    5452 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:35:08.469557    5452 start.go:159] libmachine.API.Create for "old-k8s-version-615000" (driver="qemu2")
	I0827 15:35:08.469579    5452 client.go:168] LocalClient.Create starting
	I0827 15:35:08.469653    5452 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:35:08.469684    5452 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:08.469692    5452 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:08.469730    5452 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:35:08.469754    5452 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:08.469762    5452 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:08.470127    5452 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:35:08.637158    5452 main.go:141] libmachine: Creating SSH key...
	I0827 15:35:08.870458    5452 main.go:141] libmachine: Creating Disk image...
	I0827 15:35:08.870470    5452 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:35:08.870755    5452 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/disk.qcow2
	I0827 15:35:08.880606    5452 main.go:141] libmachine: STDOUT: 
	I0827 15:35:08.880629    5452 main.go:141] libmachine: STDERR: 
	I0827 15:35:08.880689    5452 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/disk.qcow2 +20000M
	I0827 15:35:08.888783    5452 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:35:08.888808    5452 main.go:141] libmachine: STDERR: 
	I0827 15:35:08.888825    5452 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/disk.qcow2
	I0827 15:35:08.888829    5452 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:35:08.888841    5452 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:08.888868    5452 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:3c:a8:c2:4c:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/disk.qcow2
	I0827 15:35:08.890582    5452 main.go:141] libmachine: STDOUT: 
	I0827 15:35:08.890610    5452 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:08.890629    5452 client.go:171] duration metric: took 421.058917ms to LocalClient.Create
	I0827 15:35:10.892761    5452 start.go:128] duration metric: took 2.44767575s to createHost
	I0827 15:35:10.892868    5452 start.go:83] releasing machines lock for "old-k8s-version-615000", held for 2.447834584s
	W0827 15:35:10.892978    5452 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:10.899901    5452 out.go:177] * Deleting "old-k8s-version-615000" in qemu2 ...
	W0827 15:35:10.926297    5452 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:10.926323    5452 start.go:729] Will try again in 5 seconds ...
	I0827 15:35:15.928351    5452 start.go:360] acquireMachinesLock for old-k8s-version-615000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:15.928795    5452 start.go:364] duration metric: took 364.417µs to acquireMachinesLock for "old-k8s-version-615000"
	I0827 15:35:15.928929    5452 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-615000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-615000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:35:15.929135    5452 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:35:15.937670    5452 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:35:15.980714    5452 start.go:159] libmachine.API.Create for "old-k8s-version-615000" (driver="qemu2")
	I0827 15:35:15.980761    5452 client.go:168] LocalClient.Create starting
	I0827 15:35:15.980877    5452 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:35:15.980965    5452 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:15.980981    5452 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:15.981037    5452 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:35:15.981077    5452 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:15.981087    5452 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:15.981593    5452 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:35:16.144127    5452 main.go:141] libmachine: Creating SSH key...
	I0827 15:35:16.510212    5452 main.go:141] libmachine: Creating Disk image...
	I0827 15:35:16.510229    5452 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:35:16.510506    5452 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/disk.qcow2
	I0827 15:35:16.521002    5452 main.go:141] libmachine: STDOUT: 
	I0827 15:35:16.521025    5452 main.go:141] libmachine: STDERR: 
	I0827 15:35:16.521094    5452 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/disk.qcow2 +20000M
	I0827 15:35:16.529574    5452 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:35:16.529594    5452 main.go:141] libmachine: STDERR: 
	I0827 15:35:16.529607    5452 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/disk.qcow2
	I0827 15:35:16.529619    5452 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:35:16.529635    5452 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:16.529677    5452 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:33:f3:bc:f2:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/disk.qcow2
	I0827 15:35:16.531428    5452 main.go:141] libmachine: STDOUT: 
	I0827 15:35:16.531445    5452 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:16.531461    5452 client.go:171] duration metric: took 550.71375ms to LocalClient.Create
	I0827 15:35:18.533543    5452 start.go:128] duration metric: took 2.604463125s to createHost
	I0827 15:35:18.533598    5452 start.go:83] releasing machines lock for "old-k8s-version-615000", held for 2.604865125s
	W0827 15:35:18.533736    5452 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-615000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-615000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:18.541986    5452 out.go:201] 
	W0827 15:35:18.546038    5452 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:35:18.546045    5452 out.go:270] * 
	* 
	W0827 15:35:18.546682    5452 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:35:18.557847    5452 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-615000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000: exit status 7 (35.067375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.29s)
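Every qemu2 start in this report dies at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet, so QEMU is never launched. A minimal Go sketch of that reachability check (the socket path is taken from the log above; the probe itself is illustrative, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client is pointed at above;
		// "connection refused" means no daemon is accepting on that path.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}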

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-615000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-615000 create -f testdata/busybox.yaml: exit status 1 (27.136917ms)

** stderr ** 
	error: context "old-k8s-version-615000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-615000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000: exit status 7 (29.500333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-615000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000: exit status 7 (30.26775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
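The kubectl failure here is a knock-on effect of FirstStart: because the VM was never provisioned, minikube never wrote an old-k8s-version-615000 context into the kubeconfig. A short illustrative check with client-go (the kubeconfig path is the one from the run's environment above; this snippet is not part of the test suite):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig the run used and look for the profile's context.
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19522-983/kubeconfig")
		if err != nil {
			panic(err)
		}
		if _, ok := cfg.Contexts["old-k8s-version-615000"]; !ok {
			fmt.Println(`context "old-k8s-version-615000" does not exist`)
		}
	}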

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-615000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-615000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-615000 describe deploy/metrics-server -n kube-system: exit status 1 (26.914208ms)

** stderr ** 
	error: context "old-k8s-version-615000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-615000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000: exit status 7 (30.162166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
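The assertion at start_stop_delete_test.go:221 expects the metrics-server deployment to reference the custom registry joined onto the custom image, which is why the wanted string is " fake.domain/registry.k8s.io/echoserver:1.4". A sketch of that expectation as I read it from the flags above (illustrative only):

	package main

	import "fmt"

	func main() {
		image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=...
		registry := "fake.domain"                 // --registries=MetricsServer=...
		fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
	}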

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-615000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-615000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.187903959s)

-- stdout --
	* [old-k8s-version-615000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-615000" primary control-plane node in "old-k8s-version-615000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-615000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-615000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:35:20.984778    5498 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:35:20.984910    5498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:20.984914    5498 out.go:358] Setting ErrFile to fd 2...
	I0827 15:35:20.984917    5498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:20.985042    5498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:35:20.986073    5498 out.go:352] Setting JSON to false
	I0827 15:35:21.002304    5498 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3885,"bootTime":1724794235,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:35:21.002375    5498 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:35:21.007330    5498 out.go:177] * [old-k8s-version-615000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:35:21.014244    5498 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:35:21.014300    5498 notify.go:220] Checking for updates...
	I0827 15:35:21.021283    5498 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:35:21.025282    5498 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:35:21.028343    5498 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:35:21.031324    5498 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:35:21.034292    5498 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:35:21.037570    5498 config.go:182] Loaded profile config "old-k8s-version-615000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0827 15:35:21.041291    5498 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0827 15:35:21.042535    5498 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:35:21.045288    5498 out.go:177] * Using the qemu2 driver based on existing profile
	I0827 15:35:21.052157    5498 start.go:297] selected driver: qemu2
	I0827 15:35:21.052163    5498 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-615000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-615000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:35:21.052218    5498 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:35:21.054527    5498 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:35:21.054577    5498 cni.go:84] Creating CNI manager for ""
	I0827 15:35:21.054583    5498 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0827 15:35:21.054614    5498 start.go:340] cluster config:
	{Name:old-k8s-version-615000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-615000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:35:21.058220    5498 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:21.066355    5498 out.go:177] * Starting "old-k8s-version-615000" primary control-plane node in "old-k8s-version-615000" cluster
	I0827 15:35:21.070290    5498 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0827 15:35:21.070305    5498 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0827 15:35:21.070309    5498 cache.go:56] Caching tarball of preloaded images
	I0827 15:35:21.070376    5498 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:35:21.070381    5498 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0827 15:35:21.070433    5498 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/old-k8s-version-615000/config.json ...
	I0827 15:35:21.070914    5498 start.go:360] acquireMachinesLock for old-k8s-version-615000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:21.070944    5498 start.go:364] duration metric: took 24µs to acquireMachinesLock for "old-k8s-version-615000"
	I0827 15:35:21.070954    5498 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:35:21.070961    5498 fix.go:54] fixHost starting: 
	I0827 15:35:21.071084    5498 fix.go:112] recreateIfNeeded on old-k8s-version-615000: state=Stopped err=<nil>
	W0827 15:35:21.071093    5498 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:35:21.075165    5498 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-615000" ...
	I0827 15:35:21.082304    5498 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:21.082354    5498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:33:f3:bc:f2:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/disk.qcow2
	I0827 15:35:21.084496    5498 main.go:141] libmachine: STDOUT: 
	I0827 15:35:21.084516    5498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:21.084543    5498 fix.go:56] duration metric: took 13.58375ms for fixHost
	I0827 15:35:21.084548    5498 start.go:83] releasing machines lock for "old-k8s-version-615000", held for 13.599417ms
	W0827 15:35:21.084556    5498 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:35:21.084586    5498 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:21.084591    5498 start.go:729] Will try again in 5 seconds ...
	I0827 15:35:26.086597    5498 start.go:360] acquireMachinesLock for old-k8s-version-615000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:26.087181    5498 start.go:364] duration metric: took 501.333µs to acquireMachinesLock for "old-k8s-version-615000"
	I0827 15:35:26.087261    5498 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:35:26.087275    5498 fix.go:54] fixHost starting: 
	I0827 15:35:26.087871    5498 fix.go:112] recreateIfNeeded on old-k8s-version-615000: state=Stopped err=<nil>
	W0827 15:35:26.087890    5498 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:35:26.097238    5498 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-615000" ...
	I0827 15:35:26.100196    5498 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:26.100370    5498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:33:f3:bc:f2:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/old-k8s-version-615000/disk.qcow2
	I0827 15:35:26.108176    5498 main.go:141] libmachine: STDOUT: 
	I0827 15:35:26.108226    5498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:26.108309    5498 fix.go:56] duration metric: took 21.034417ms for fixHost
	I0827 15:35:26.108324    5498 start.go:83] releasing machines lock for "old-k8s-version-615000", held for 21.123542ms
	W0827 15:35:26.108471    5498 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-615000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-615000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:26.117232    5498 out.go:201] 
	W0827 15:35:26.121224    5498 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:35:26.121244    5498 out.go:270] * 
	* 
	W0827 15:35:26.122774    5498 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:35:26.132215    5498 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-615000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000: exit status 7 (62.787583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
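Unlike FirstStart, SecondStart reuses the existing machine, so the flow above is fixHost -> restart -> wait 5 seconds -> one more restart before giving up with GUEST_PROVISION. An illustrative sketch of that retry shape (minikube's real loop lives in start.go and re-acquires the machines lock per attempt; the error string is the one captured above):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start that fails above.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}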

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-615000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000: exit status 7 (31.899958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-615000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-615000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-615000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.06075ms)

** stderr ** 
	error: context "old-k8s-version-615000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-615000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000: exit status 7 (28.6925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-615000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000: exit status 7 (29.749125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
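The "-want +got" block above is a go-cmp-style diff of two string slices: every expected v1.20.0 image is reported missing because "image list" had nothing to return from a host that never ran. A minimal sketch of the same set difference (the image names are the ones in the diff; the helper is illustrative, not the test's code):

	package main

	import "fmt"

	// missing returns every image in want that is absent from got.
	func missing(want, got []string) []string {
		have := make(map[string]bool, len(got))
		for _, g := range got {
			have[g] = true
		}
		var out []string
		for _, w := range want {
			if !have[w] {
				out = append(out, w)
			}
		}
		return out
	}

	func main() {
		want := []string{
			"k8s.gcr.io/kube-apiserver:v1.20.0",
			"k8s.gcr.io/pause:3.2",
			// ...the rest of the v1.20.0 list in the diff above
		}
		fmt.Println(missing(want, nil)) // got is empty: the host never ran
	}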

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-615000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-615000 --alsologtostderr -v=1: exit status 83 (40.342625ms)

-- stdout --
	* The control-plane node old-k8s-version-615000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-615000"

-- /stdout --
** stderr ** 
	I0827 15:35:26.394317    5519 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:35:26.395322    5519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:26.395327    5519 out.go:358] Setting ErrFile to fd 2...
	I0827 15:35:26.395329    5519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:26.395503    5519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:35:26.395707    5519 out.go:352] Setting JSON to false
	I0827 15:35:26.395713    5519 mustload.go:65] Loading cluster: old-k8s-version-615000
	I0827 15:35:26.395910    5519 config.go:182] Loaded profile config "old-k8s-version-615000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0827 15:35:26.400646    5519 out.go:177] * The control-plane node old-k8s-version-615000 host is not running: state=Stopped
	I0827 15:35:26.403554    5519 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-615000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-615000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000: exit status 7 (28.764459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-615000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000: exit status 7 (29.581833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
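Every post-mortem in this group ends with "exit status 7 (may be ok)". If I read the status command's help text correctly, minikube encodes three health bits in the exit code from right to left (1: host not OK, 2: cluster not OK, 4: kubernetes not OK), so 7 is just state="Stopped" restated on all fronts. A small decoding sketch under that assumption:

	package main

	import "fmt"

	func main() {
		code := 7 // the exit code seen in each post-mortem status check
		fmt.Println("host NOK:      ", code&1 != 0)
		fmt.Println("cluster NOK:   ", code&2 != 0)
		fmt.Println("kubernetes NOK:", code&4 != 0)
	}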

TestStartStop/group/no-preload/serial/FirstStart (12.14s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-908000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-908000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (12.076361334s)

-- stdout --
	* [no-preload-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-908000" primary control-plane node in "no-preload-908000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-908000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:35:26.715584    5536 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:35:26.715723    5536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:26.715727    5536 out.go:358] Setting ErrFile to fd 2...
	I0827 15:35:26.715729    5536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:26.715860    5536 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:35:26.717027    5536 out.go:352] Setting JSON to false
	I0827 15:35:26.733820    5536 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3891,"bootTime":1724794235,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:35:26.733919    5536 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:35:26.738887    5536 out.go:177] * [no-preload-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:35:26.744917    5536 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:35:26.744948    5536 notify.go:220] Checking for updates...
	I0827 15:35:26.752892    5536 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:35:26.756866    5536 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:35:26.760461    5536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:35:26.762969    5536 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:35:26.765873    5536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:35:26.769246    5536 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:35:26.769310    5536 config.go:182] Loaded profile config "stopped-upgrade-443000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0827 15:35:26.769357    5536 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:35:26.773883    5536 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:35:26.780901    5536 start.go:297] selected driver: qemu2
	I0827 15:35:26.780909    5536 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:35:26.780915    5536 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:35:26.783235    5536 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:35:26.785849    5536 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:35:26.788986    5536 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:35:26.789024    5536 cni.go:84] Creating CNI manager for ""
	I0827 15:35:26.789035    5536 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:35:26.789044    5536 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 15:35:26.789075    5536 start.go:340] cluster config:
	{Name:no-preload-908000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:35:26.792759    5536 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:26.799861    5536 out.go:177] * Starting "no-preload-908000" primary control-plane node in "no-preload-908000" cluster
	I0827 15:35:26.803811    5536 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:35:26.803881    5536 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/no-preload-908000/config.json ...
	I0827 15:35:26.803897    5536 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/no-preload-908000/config.json: {Name:mk65525b8918f907d304bfb58920284983909740 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:35:26.803910    5536 cache.go:107] acquiring lock: {Name:mk7af5ae5cf7ecca7233f020552354182cef7918 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:26.803934    5536 cache.go:107] acquiring lock: {Name:mk4e53bd7e53b1ab48770856cb75dc62cc20a021 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:26.803975    5536 cache.go:115] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0827 15:35:26.803984    5536 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 78.917µs
	I0827 15:35:26.803990    5536 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0827 15:35:26.803912    5536 cache.go:107] acquiring lock: {Name:mka7f21182ddf7d0a9274f3a0ddc9ba09d911cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:26.804028    5536 cache.go:107] acquiring lock: {Name:mk89becd0c213f5d0732116e949a405aead09f5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:26.804082    5536 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0827 15:35:26.804059    5536 cache.go:107] acquiring lock: {Name:mk8830ad59f74edcadbbabcde57eca6fdf693e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:26.804099    5536 cache.go:107] acquiring lock: {Name:mk1541fae4795c185f8c3e7653f48c130739f55e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:26.804113    5536 cache.go:107] acquiring lock: {Name:mk054d1b1499bc8c0ba09594b239a2fac2754591 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:26.804156    5536 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0827 15:35:26.804212    5536 cache.go:107] acquiring lock: {Name:mk37ae0c45c34712af914810d022d65d69daa7bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:26.804249    5536 start.go:360] acquireMachinesLock for no-preload-908000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:26.804270    5536 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0827 15:35:26.804301    5536 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0827 15:35:26.804314    5536 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0827 15:35:26.804259    5536 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0827 15:35:26.804346    5536 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0827 15:35:26.804379    5536 start.go:364] duration metric: took 103.292µs to acquireMachinesLock for "no-preload-908000"
	I0827 15:35:26.804430    5536 start.go:93] Provisioning new machine with config: &{Name:no-preload-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:35:26.804469    5536 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:35:26.811839    5536 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:35:26.814245    5536 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0827 15:35:26.814887    5536 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0827 15:35:26.815628    5536 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0827 15:35:26.815691    5536 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0827 15:35:26.817201    5536 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0827 15:35:26.817218    5536 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0827 15:35:26.817373    5536 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0827 15:35:26.827890    5536 start.go:159] libmachine.API.Create for "no-preload-908000" (driver="qemu2")
	I0827 15:35:26.827917    5536 client.go:168] LocalClient.Create starting
	I0827 15:35:26.828021    5536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:35:26.828056    5536 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:26.828066    5536 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:26.828121    5536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:35:26.828146    5536 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:26.828155    5536 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:26.828497    5536 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:35:26.989052    5536 main.go:141] libmachine: Creating SSH key...
	I0827 15:35:27.209378    5536 main.go:141] libmachine: Creating Disk image...
	I0827 15:35:27.209398    5536 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:35:27.209692    5536 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/disk.qcow2
	I0827 15:35:27.219475    5536 main.go:141] libmachine: STDOUT: 
	I0827 15:35:27.219491    5536 main.go:141] libmachine: STDERR: 
	I0827 15:35:27.219534    5536 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/disk.qcow2 +20000M
	I0827 15:35:27.227514    5536 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:35:27.227529    5536 main.go:141] libmachine: STDERR: 
	I0827 15:35:27.227549    5536 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/disk.qcow2
	I0827 15:35:27.227554    5536 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:35:27.227568    5536 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:27.227595    5536 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:47:bb:2a:59:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/disk.qcow2
	I0827 15:35:27.229254    5536 main.go:141] libmachine: STDOUT: 
	I0827 15:35:27.229269    5536 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:27.229287    5536 client.go:171] duration metric: took 401.379708ms to LocalClient.Create
	I0827 15:35:27.719662    5536 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0827 15:35:27.756765    5536 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0827 15:35:27.785286    5536 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0827 15:35:27.795154    5536 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0827 15:35:27.938285    5536 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0827 15:35:27.971874    5536 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0827 15:35:27.975468    5536 cache.go:162] opening:  /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0827 15:35:28.076320    5536 cache.go:157] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0827 15:35:28.076342    5536 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 1.272374958s
	I0827 15:35:28.076356    5536 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0827 15:35:29.229361    5536 start.go:128] duration metric: took 2.424956458s to createHost
	I0827 15:35:29.229390    5536 start.go:83] releasing machines lock for "no-preload-908000", held for 2.425079875s
	W0827 15:35:29.229432    5536 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:29.238408    5536 out.go:177] * Deleting "no-preload-908000" in qemu2 ...
	W0827 15:35:29.263019    5536 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:29.263040    5536 start.go:729] Will try again in 5 seconds ...
	I0827 15:35:30.572529    5536 cache.go:157] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0827 15:35:30.572563    5536 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 3.768567083s
	I0827 15:35:30.572582    5536 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0827 15:35:30.702524    5536 cache.go:157] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0827 15:35:30.702549    5536 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.898669333s
	I0827 15:35:30.702562    5536 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0827 15:35:31.003058    5536 cache.go:157] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0827 15:35:31.003103    5536 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 4.199140333s
	I0827 15:35:31.003120    5536 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0827 15:35:31.105137    5536 cache.go:157] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0827 15:35:31.105187    5536 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 4.301419s
	I0827 15:35:31.105202    5536 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0827 15:35:31.709837    5536 cache.go:157] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0827 15:35:31.709875    5536 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 4.906098208s
	I0827 15:35:31.709901    5536 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0827 15:35:34.263101    5536 start.go:360] acquireMachinesLock for no-preload-908000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:35.857973    5536 cache.go:157] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0827 15:35:35.858026    5536 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 9.054272792s
	I0827 15:35:35.858073    5536 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0827 15:35:35.858106    5536 cache.go:87] Successfully saved all images to host disk.
	I0827 15:35:36.322790    5536 start.go:364] duration metric: took 2.059728875s to acquireMachinesLock for "no-preload-908000"
	I0827 15:35:36.322946    5536 start.go:93] Provisioning new machine with config: &{Name:no-preload-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:35:36.323207    5536 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:35:36.332664    5536 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:35:36.384377    5536 start.go:159] libmachine.API.Create for "no-preload-908000" (driver="qemu2")
	I0827 15:35:36.384418    5536 client.go:168] LocalClient.Create starting
	I0827 15:35:36.384546    5536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:35:36.384618    5536 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:36.384643    5536 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:36.384708    5536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:35:36.384755    5536 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:36.384766    5536 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:36.385295    5536 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:35:36.591745    5536 main.go:141] libmachine: Creating SSH key...
	I0827 15:35:36.687173    5536 main.go:141] libmachine: Creating Disk image...
	I0827 15:35:36.687182    5536 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:35:36.687374    5536 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/disk.qcow2
	I0827 15:35:36.696953    5536 main.go:141] libmachine: STDOUT: 
	I0827 15:35:36.696974    5536 main.go:141] libmachine: STDERR: 
	I0827 15:35:36.697034    5536 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/disk.qcow2 +20000M
	I0827 15:35:36.705084    5536 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:35:36.705101    5536 main.go:141] libmachine: STDERR: 
	I0827 15:35:36.705115    5536 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/disk.qcow2
	I0827 15:35:36.705117    5536 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:35:36.705132    5536 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:36.705166    5536 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:a9:42:f8:35:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/disk.qcow2
	I0827 15:35:36.706905    5536 main.go:141] libmachine: STDOUT: 
	I0827 15:35:36.706920    5536 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:36.706933    5536 client.go:171] duration metric: took 322.519709ms to LocalClient.Create
	I0827 15:35:38.708826    5536 start.go:128] duration metric: took 2.385638209s to createHost
	I0827 15:35:38.708940    5536 start.go:83] releasing machines lock for "no-preload-908000", held for 2.386181583s
	W0827 15:35:38.709321    5536 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:38.730570    5536 out.go:201] 
	W0827 15:35:38.734308    5536 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:35:38.734363    5536 out.go:270] * 
	* 
	W0827 15:35:38.736310    5536 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:35:38.746402    5536 out.go:201] 

** /stderr **
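
Every create attempt in the stderr above dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. A minimal host-side check, using the paths already shown in the log (the gateway address in the last command is an assumed default, not something this report states):

	# is the socket_vmnet daemon running, and does its socket exist?
	sudo launchctl list | grep socket_vmnet
	ls -l /var/run/socket_vmnet
	# if the daemon is down, it can be started by hand with the binary the log references
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
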
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-908000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000: exit status 7 (65.213ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (12.14s)
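
Note that the disk-image preparation succeeds on every attempt; only the network hookup fails. The two qemu-img steps from the log can be reproduced and checked by hand; a sketch assuming a raw seed image named disk.qcow2.raw in the current directory:

	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M
	# qemu-img info reports the resulting virtual size, roughly 20 GB on top of the seed image
	qemu-img info disk.qcow2
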

TestStartStop/group/embed-certs/serial/FirstStart (9.86s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-066000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-066000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.790689708s)

-- stdout --
	* [embed-certs-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-066000" primary control-plane node in "embed-certs-066000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-066000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:35:33.967370    5580 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:35:33.967582    5580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:33.967586    5580 out.go:358] Setting ErrFile to fd 2...
	I0827 15:35:33.967589    5580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:33.967723    5580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:35:33.968847    5580 out.go:352] Setting JSON to false
	I0827 15:35:33.985509    5580 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3898,"bootTime":1724794235,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:35:33.985584    5580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:35:33.989934    5580 out.go:177] * [embed-certs-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:35:33.996908    5580 notify.go:220] Checking for updates...
	I0827 15:35:34.001886    5580 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:35:34.002902    5580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:35:34.009857    5580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:35:34.013704    5580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:35:34.015865    5580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:35:34.021926    5580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:35:34.026168    5580 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:35:34.026242    5580 config.go:182] Loaded profile config "no-preload-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:35:34.026298    5580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:35:34.030840    5580 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:35:34.037854    5580 start.go:297] selected driver: qemu2
	I0827 15:35:34.037860    5580 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:35:34.037867    5580 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:35:34.040282    5580 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:35:34.043854    5580 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:35:34.047900    5580 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:35:34.047931    5580 cni.go:84] Creating CNI manager for ""
	I0827 15:35:34.047938    5580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:35:34.047942    5580 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 15:35:34.047977    5580 start.go:340] cluster config:
	{Name:embed-certs-066000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:35:34.051957    5580 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:34.056895    5580 out.go:177] * Starting "embed-certs-066000" primary control-plane node in "embed-certs-066000" cluster
	I0827 15:35:34.060838    5580 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:35:34.060854    5580 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:35:34.060863    5580 cache.go:56] Caching tarball of preloaded images
	I0827 15:35:34.060916    5580 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:35:34.060921    5580 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:35:34.060979    5580 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/embed-certs-066000/config.json ...
	I0827 15:35:34.060990    5580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/embed-certs-066000/config.json: {Name:mk3224aba0bb2e35238af58e31a035f65574701e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:35:34.061203    5580 start.go:360] acquireMachinesLock for embed-certs-066000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:34.061235    5580 start.go:364] duration metric: took 26.167µs to acquireMachinesLock for "embed-certs-066000"
	I0827 15:35:34.061246    5580 start.go:93] Provisioning new machine with config: &{Name:embed-certs-066000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:35:34.061284    5580 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:35:34.069816    5580 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:35:34.086139    5580 start.go:159] libmachine.API.Create for "embed-certs-066000" (driver="qemu2")
	I0827 15:35:34.086164    5580 client.go:168] LocalClient.Create starting
	I0827 15:35:34.086227    5580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:35:34.086259    5580 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:34.086269    5580 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:34.086314    5580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:35:34.086337    5580 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:34.086345    5580 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:34.086656    5580 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:35:34.250850    5580 main.go:141] libmachine: Creating SSH key...
	I0827 15:35:34.301056    5580 main.go:141] libmachine: Creating Disk image...
	I0827 15:35:34.301063    5580 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:35:34.301292    5580 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/disk.qcow2
	I0827 15:35:34.310538    5580 main.go:141] libmachine: STDOUT: 
	I0827 15:35:34.310558    5580 main.go:141] libmachine: STDERR: 
	I0827 15:35:34.310599    5580 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/disk.qcow2 +20000M
	I0827 15:35:34.318764    5580 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:35:34.318782    5580 main.go:141] libmachine: STDERR: 
	I0827 15:35:34.318795    5580 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/disk.qcow2
	I0827 15:35:34.318799    5580 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:35:34.318816    5580 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:34.318841    5580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:1c:db:23:b8:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/disk.qcow2
	I0827 15:35:34.320463    5580 main.go:141] libmachine: STDOUT: 
	I0827 15:35:34.320479    5580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:34.320498    5580 client.go:171] duration metric: took 234.338917ms to LocalClient.Create
	I0827 15:35:36.322603    5580 start.go:128] duration metric: took 2.261377166s to createHost
	I0827 15:35:36.322659    5580 start.go:83] releasing machines lock for "embed-certs-066000", held for 2.261489042s
	W0827 15:35:36.322764    5580 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:36.340747    5580 out.go:177] * Deleting "embed-certs-066000" in qemu2 ...
	W0827 15:35:36.365690    5580 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:36.365706    5580 start.go:729] Will try again in 5 seconds ...
	I0827 15:35:41.367649    5580 start.go:360] acquireMachinesLock for embed-certs-066000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:41.367765    5580 start.go:364] duration metric: took 88.083µs to acquireMachinesLock for "embed-certs-066000"
	I0827 15:35:41.367783    5580 start.go:93] Provisioning new machine with config: &{Name:embed-certs-066000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:35:41.367871    5580 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:35:41.376017    5580 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:35:41.406146    5580 start.go:159] libmachine.API.Create for "embed-certs-066000" (driver="qemu2")
	I0827 15:35:41.406187    5580 client.go:168] LocalClient.Create starting
	I0827 15:35:41.406272    5580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:35:41.406317    5580 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:41.406331    5580 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:41.406384    5580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:35:41.406408    5580 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:41.406423    5580 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:41.406955    5580 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:35:41.567275    5580 main.go:141] libmachine: Creating SSH key...
	I0827 15:35:41.658673    5580 main.go:141] libmachine: Creating Disk image...
	I0827 15:35:41.658679    5580 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:35:41.658916    5580 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/disk.qcow2
	I0827 15:35:41.668412    5580 main.go:141] libmachine: STDOUT: 
	I0827 15:35:41.668432    5580 main.go:141] libmachine: STDERR: 
	I0827 15:35:41.668489    5580 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/disk.qcow2 +20000M
	I0827 15:35:41.676514    5580 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:35:41.676530    5580 main.go:141] libmachine: STDERR: 
	I0827 15:35:41.676541    5580 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/disk.qcow2
	I0827 15:35:41.676547    5580 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:35:41.676555    5580 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:41.676576    5580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:5d:3a:6e:54:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/disk.qcow2
	I0827 15:35:41.678182    5580 main.go:141] libmachine: STDOUT: 
	I0827 15:35:41.678197    5580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:41.678210    5580 client.go:171] duration metric: took 272.028709ms to LocalClient.Create
	I0827 15:35:43.680329    5580 start.go:128] duration metric: took 2.312495167s to createHost
	I0827 15:35:43.680387    5580 start.go:83] releasing machines lock for "embed-certs-066000", held for 2.31268475s
	W0827 15:35:43.680721    5580 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-066000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-066000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:43.696391    5580 out.go:201] 
	W0827 15:35:43.699429    5580 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:35:43.699485    5580 out.go:270] * 
	* 
	W0827 15:35:43.701858    5580 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:35:43.715336    5580 out.go:201] 

** /stderr **
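
Both profiles fail before Kubernetes is ever involved, so one way to isolate the problem to the socket_vmnet layer is to retry with the qemu2 driver's user-mode networking, which does not touch /var/run/socket_vmnet at all. A sketch, assuming the --network=builtin mode the qemu2 driver documents (profile name reused from the log):

	out/minikube-darwin-arm64 start -p embed-certs-066000 --driver=qemu2 --network=builtin
	# builtin networking bypasses the socket_vmnet daemon, at the cost of the VM not being reachable from the host network
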
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-066000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000: exit status 7 (65.636292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.86s)
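
The error text itself suggests the recovery path: delete the half-created profile before retrying. A sketch using the binary from this run; listing profiles first confirms what state was left behind:

	out/minikube-darwin-arm64 profile list
	out/minikube-darwin-arm64 delete -p embed-certs-066000
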

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-908000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-908000 create -f testdata/busybox.yaml: exit status 1 (29.740541ms)

** stderr ** 
	error: context "no-preload-908000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-908000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000: exit status 7 (29.739625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-908000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000: exit status 7 (29.436875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
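
This DeployApp failure is a downstream symptom of FirstStart: the cluster never came up, so no kubeconfig context named no-preload-908000 was ever written. Standard kubectl config commands confirm that before any time is spent on the busybox manifest:

	kubectl config get-contexts      # the failed profile should be absent from this list
	kubectl config current-context   # whichever context kubectl would otherwise use
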

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-908000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-908000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-908000 describe deploy/metrics-server -n kube-system: exit status 1 (26.41875ms)

** stderr ** 
	error: context "no-preload-908000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-908000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000: exit status 7 (29.711958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
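
On a healthy cluster, the assertion at start_stop_delete_test.go:221 amounts to checking that the metrics-server deployment picked up the fake.domain registry override. A sketch of the equivalent manual check, assuming a running cluster with the addon enabled (the jsonpath expression is illustrative, not taken from the test):

	kubectl --context no-preload-908000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to print fake.domain/registry.k8s.io/echoserver:1.4 given the --registries override
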

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-908000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-908000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.188307333s)

-- stdout --
	* [no-preload-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-908000" primary control-plane node in "no-preload-908000" cluster
	* Restarting existing qemu2 VM for "no-preload-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:35:41.187696    5622 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:35:41.187848    5622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:41.187852    5622 out.go:358] Setting ErrFile to fd 2...
	I0827 15:35:41.187854    5622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:41.187998    5622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:35:41.189044    5622 out.go:352] Setting JSON to false
	I0827 15:35:41.205263    5622 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3906,"bootTime":1724794235,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:35:41.205326    5622 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:35:41.210366    5622 out.go:177] * [no-preload-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:35:41.217480    5622 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:35:41.217524    5622 notify.go:220] Checking for updates...
	I0827 15:35:41.225459    5622 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:35:41.229473    5622 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:35:41.232384    5622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:35:41.235479    5622 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:35:41.238491    5622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:35:41.241811    5622 config.go:182] Loaded profile config "no-preload-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:35:41.242074    5622 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:35:41.246429    5622 out.go:177] * Using the qemu2 driver based on existing profile
	I0827 15:35:41.253473    5622 start.go:297] selected driver: qemu2
	I0827 15:35:41.253482    5622 start.go:901] validating driver "qemu2" against &{Name:no-preload-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:35:41.253554    5622 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:35:41.255973    5622 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:35:41.256012    5622 cni.go:84] Creating CNI manager for ""
	I0827 15:35:41.256023    5622 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:35:41.256060    5622 start.go:340] cluster config:
	{Name:no-preload-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:35:41.259629    5622 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:41.268336    5622 out.go:177] * Starting "no-preload-908000" primary control-plane node in "no-preload-908000" cluster
	I0827 15:35:41.272539    5622 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:35:41.272629    5622 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/no-preload-908000/config.json ...
	I0827 15:35:41.272673    5622 cache.go:107] acquiring lock: {Name:mka7f21182ddf7d0a9274f3a0ddc9ba09d911cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:41.272696    5622 cache.go:107] acquiring lock: {Name:mk054d1b1499bc8c0ba09594b239a2fac2754591 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:41.272699    5622 cache.go:107] acquiring lock: {Name:mk8830ad59f74edcadbbabcde57eca6fdf693e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:41.272751    5622 cache.go:115] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0827 15:35:41.272752    5622 cache.go:115] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0827 15:35:41.272759    5622 cache.go:115] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0827 15:35:41.272760    5622 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 63.417µs
	I0827 15:35:41.272774    5622 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0827 15:35:41.272764    5622 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 66.125µs
	I0827 15:35:41.272775    5622 cache.go:107] acquiring lock: {Name:mk37ae0c45c34712af914810d022d65d69daa7bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:41.272784    5622 cache.go:107] acquiring lock: {Name:mk89becd0c213f5d0732116e949a405aead09f5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:41.272789    5622 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 124.541µs
	I0827 15:35:41.272794    5622 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0827 15:35:41.272674    5622 cache.go:107] acquiring lock: {Name:mk7af5ae5cf7ecca7233f020552354182cef7918 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:41.272672    5622 cache.go:107] acquiring lock: {Name:mk4e53bd7e53b1ab48770856cb75dc62cc20a021 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:41.272823    5622 cache.go:115] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0827 15:35:41.272825    5622 cache.go:115] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0827 15:35:41.272828    5622 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 44.542µs
	I0827 15:35:41.272830    5622 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 55.875µs
	I0827 15:35:41.272832    5622 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0827 15:35:41.272835    5622 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0827 15:35:41.272840    5622 cache.go:115] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0827 15:35:41.272845    5622 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 172.667µs
	I0827 15:35:41.272851    5622 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0827 15:35:41.272779    5622 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0827 15:35:41.272865    5622 cache.go:107] acquiring lock: {Name:mk1541fae4795c185f8c3e7653f48c130739f55e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:41.272920    5622 cache.go:115] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0827 15:35:41.272925    5622 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 71.875µs
	I0827 15:35:41.272935    5622 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0827 15:35:41.272945    5622 cache.go:115] /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0827 15:35:41.272950    5622 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 288.125µs
	I0827 15:35:41.272955    5622 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0827 15:35:41.272959    5622 cache.go:87] Successfully saved all images to host disk.
	I0827 15:35:41.272977    5622 start.go:360] acquireMachinesLock for no-preload-908000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:41.273017    5622 start.go:364] duration metric: took 32.834µs to acquireMachinesLock for "no-preload-908000"
	I0827 15:35:41.273032    5622 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:35:41.273040    5622 fix.go:54] fixHost starting: 
	I0827 15:35:41.273170    5622 fix.go:112] recreateIfNeeded on no-preload-908000: state=Stopped err=<nil>
	W0827 15:35:41.273179    5622 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:35:41.280427    5622 out.go:177] * Restarting existing qemu2 VM for "no-preload-908000" ...
	I0827 15:35:41.284491    5622 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:41.284537    5622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:a9:42:f8:35:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/disk.qcow2
	I0827 15:35:41.286715    5622 main.go:141] libmachine: STDOUT: 
	I0827 15:35:41.286736    5622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:41.286763    5622 fix.go:56] duration metric: took 13.72625ms for fixHost
	I0827 15:35:41.286767    5622 start.go:83] releasing machines lock for "no-preload-908000", held for 13.7465ms
	W0827 15:35:41.286774    5622 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:35:41.286809    5622 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:41.286813    5622 start.go:729] Will try again in 5 seconds ...
	I0827 15:35:46.288876    5622 start.go:360] acquireMachinesLock for no-preload-908000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:46.289294    5622 start.go:364] duration metric: took 329.792µs to acquireMachinesLock for "no-preload-908000"
	I0827 15:35:46.289375    5622 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:35:46.289392    5622 fix.go:54] fixHost starting: 
	I0827 15:35:46.290133    5622 fix.go:112] recreateIfNeeded on no-preload-908000: state=Stopped err=<nil>
	W0827 15:35:46.290164    5622 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:35:46.294561    5622 out.go:177] * Restarting existing qemu2 VM for "no-preload-908000" ...
	I0827 15:35:46.302588    5622 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:46.302936    5622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:a9:42:f8:35:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/no-preload-908000/disk.qcow2
	I0827 15:35:46.312070    5622 main.go:141] libmachine: STDOUT: 
	I0827 15:35:46.312132    5622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:46.312191    5622 fix.go:56] duration metric: took 22.800041ms for fixHost
	I0827 15:35:46.312214    5622 start.go:83] releasing machines lock for "no-preload-908000", held for 22.897875ms
	W0827 15:35:46.312413    5622 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:46.319524    5622 out.go:201] 
	W0827 15:35:46.323551    5622 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:35:46.323586    5622 out.go:270] * 
	* 
	W0827 15:35:46.326197    5622 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:35:46.334505    5622 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-908000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000: exit status 7 (66.069042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
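
Both restart attempts in the run above die at the same point: libmachine execs qemu-system-aarch64 through socket_vmnet_client, and the dial of /var/run/socket_vmnet is refused before the VM ever boots, so the host-side socket_vmnet daemon, not qemu or the cluster config, is the first thing to check. A standalone Go sketch of that connectivity probe, using only the socket path shown in the log (illustrative code, not part of minikube):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken verbatim from the SocketVMnetPath field in the log.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Same condition the driver surfaces as:
			// Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}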

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-066000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-066000 create -f testdata/busybox.yaml: exit status 1 (30.322584ms)

** stderr **
	error: context "embed-certs-066000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-066000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000: exit status 7 (29.6475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-066000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000: exit status 7 (29.726834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
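
The create fails before any YAML is applied because the kubeconfig context was never written: the cluster never started, so no context exists. A hedged Go sketch of the guard a harness could run before invoking kubectl with an explicit context (the context name is taken from the log; the sketch shells out to kubectl and is not how the test itself is implemented):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// hasContext reports whether the named context exists in the active kubeconfig.
	func hasContext(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if ctx == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasContext("embed-certs-066000")
		if err != nil || !ok {
			// Matches the failure above: context "embed-certs-066000" does not exist.
			fmt.Fprintln(os.Stderr, "context missing; start the cluster first")
			os.Exit(1)
		}
	}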

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-066000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-066000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-066000 describe deploy/metrics-server -n kube-system: exit status 1 (27.334458ms)

** stderr **
	error: context "embed-certs-066000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-066000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000: exit status 7 (29.841042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
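
The assertion above expects the deployment description to contain " fake.domain/registry.k8s.io/echoserver:1.4", i.e. the --registries override prefixed onto the --images override. A one-function sketch of how that expected reference appears to be composed (an inference from the flags and the expected string, not minikube's actual code):

	package main

	import "fmt"

	// expectedImage joins a registry override and an image override the way
	// the expected string in the assertion above is formed.
	func expectedImage(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

	func main() {
		// Prints: fake.domain/registry.k8s.io/echoserver:1.4
		fmt.Println(expectedImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	}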

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-066000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-066000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.191247334s)

-- stdout --
	* [embed-certs-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-066000" primary control-plane node in "embed-certs-066000" cluster
	* Restarting existing qemu2 VM for "embed-certs-066000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-066000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:35:45.978683    5659 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:35:45.978829    5659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:45.978833    5659 out.go:358] Setting ErrFile to fd 2...
	I0827 15:35:45.978835    5659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:45.978943    5659 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:35:45.979947    5659 out.go:352] Setting JSON to false
	I0827 15:35:45.996036    5659 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3910,"bootTime":1724794235,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:35:45.996115    5659 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:35:46.000604    5659 out.go:177] * [embed-certs-066000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:35:46.007619    5659 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:35:46.007662    5659 notify.go:220] Checking for updates...
	I0827 15:35:46.015636    5659 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:35:46.018603    5659 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:35:46.021580    5659 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:35:46.024603    5659 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:35:46.027627    5659 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:35:46.030919    5659 config.go:182] Loaded profile config "embed-certs-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:35:46.031201    5659 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:35:46.034567    5659 out.go:177] * Using the qemu2 driver based on existing profile
	I0827 15:35:46.041624    5659 start.go:297] selected driver: qemu2
	I0827 15:35:46.041631    5659 start.go:901] validating driver "qemu2" against &{Name:embed-certs-066000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:35:46.041712    5659 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:35:46.043937    5659 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:35:46.043962    5659 cni.go:84] Creating CNI manager for ""
	I0827 15:35:46.043969    5659 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:35:46.043997    5659 start.go:340] cluster config:
	{Name:embed-certs-066000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-066000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:35:46.047497    5659 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:46.054563    5659 out.go:177] * Starting "embed-certs-066000" primary control-plane node in "embed-certs-066000" cluster
	I0827 15:35:46.058610    5659 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:35:46.058628    5659 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:35:46.058639    5659 cache.go:56] Caching tarball of preloaded images
	I0827 15:35:46.058717    5659 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:35:46.058723    5659 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:35:46.058782    5659 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/embed-certs-066000/config.json ...
	I0827 15:35:46.059276    5659 start.go:360] acquireMachinesLock for embed-certs-066000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:46.059308    5659 start.go:364] duration metric: took 23.334µs to acquireMachinesLock for "embed-certs-066000"
	I0827 15:35:46.059318    5659 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:35:46.059323    5659 fix.go:54] fixHost starting: 
	I0827 15:35:46.059446    5659 fix.go:112] recreateIfNeeded on embed-certs-066000: state=Stopped err=<nil>
	W0827 15:35:46.059454    5659 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:35:46.066575    5659 out.go:177] * Restarting existing qemu2 VM for "embed-certs-066000" ...
	I0827 15:35:46.069543    5659 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:46.069578    5659 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:5d:3a:6e:54:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/disk.qcow2
	I0827 15:35:46.071651    5659 main.go:141] libmachine: STDOUT: 
	I0827 15:35:46.071674    5659 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:46.071705    5659 fix.go:56] duration metric: took 12.382375ms for fixHost
	I0827 15:35:46.071710    5659 start.go:83] releasing machines lock for "embed-certs-066000", held for 12.397458ms
	W0827 15:35:46.071725    5659 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:35:46.071769    5659 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:46.071774    5659 start.go:729] Will try again in 5 seconds ...
	I0827 15:35:51.073850    5659 start.go:360] acquireMachinesLock for embed-certs-066000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:51.074347    5659 start.go:364] duration metric: took 397.833µs to acquireMachinesLock for "embed-certs-066000"
	I0827 15:35:51.074495    5659 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:35:51.074518    5659 fix.go:54] fixHost starting: 
	I0827 15:35:51.075313    5659 fix.go:112] recreateIfNeeded on embed-certs-066000: state=Stopped err=<nil>
	W0827 15:35:51.075338    5659 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:35:51.093909    5659 out.go:177] * Restarting existing qemu2 VM for "embed-certs-066000" ...
	I0827 15:35:51.098516    5659 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:51.098808    5659 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:5d:3a:6e:54:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/embed-certs-066000/disk.qcow2
	I0827 15:35:51.108157    5659 main.go:141] libmachine: STDOUT: 
	I0827 15:35:51.108221    5659 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:51.108290    5659 fix.go:56] duration metric: took 33.776709ms for fixHost
	I0827 15:35:51.108302    5659 start.go:83] releasing machines lock for "embed-certs-066000", held for 33.931583ms
	W0827 15:35:51.108505    5659 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-066000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-066000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:51.116645    5659 out.go:201] 
	W0827 15:35:51.118108    5659 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:35:51.118130    5659 out.go:270] * 
	* 
	W0827 15:35:51.120936    5659 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:35:51.129652    5659 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-066000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000: exit status 7 (67.037583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
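
The stderr trace shows the start flow's retry shape: one failed fixHost attempt, a fixed five-second wait ("Will try again in 5 seconds ..."), one more attempt, then exit with GUEST_PROVISION. A minimal sketch of that pattern as it reads from the log (the real logic lives in minikube's start.go and is more involved):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start that keeps failing with a
	// refused connection to /var/run/socket_vmnet.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // the wait visible between the two attempts in the log
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}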

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-908000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000: exit status 7 (32.700917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-908000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-908000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-908000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.05225ms)

** stderr **
	error: context "no-preload-908000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-908000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000: exit status 7 (28.39025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-908000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000: exit status 7 (28.747917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
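
The -want +got diff above lists every expected v1.31.0 image as missing because `image list` had nothing to report from a host that never started. The test prints a go-cmp style diff; a simplified set-difference that reaches the same conclusion (illustrative, not the test's code):

	package main

	import "fmt"

	// missing returns the entries of want that are absent from got.
	func missing(want, got []string) []string {
		have := make(map[string]bool, len(got))
		for _, g := range got {
			have[g] = true
		}
		var out []string
		for _, w := range want {
			if !have[w] {
				out = append(out, w)
			}
		}
		return out
	}

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/kube-controller-manager:v1.31.0",
			"registry.k8s.io/kube-proxy:v1.31.0",
			"registry.k8s.io/kube-scheduler:v1.31.0",
			"registry.k8s.io/pause:3.10",
		}
		fmt.Println(missing(want, nil)) // got is empty: the host never started
	}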

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-908000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-908000 --alsologtostderr -v=1: exit status 83 (39.860333ms)

-- stdout --
	* The control-plane node no-preload-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-908000"

-- /stdout --
** stderr ** 
	I0827 15:35:46.601371    5678 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:35:46.601515    5678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:46.601518    5678 out.go:358] Setting ErrFile to fd 2...
	I0827 15:35:46.601521    5678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:46.601664    5678 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:35:46.601887    5678 out.go:352] Setting JSON to false
	I0827 15:35:46.601895    5678 mustload.go:65] Loading cluster: no-preload-908000
	I0827 15:35:46.602078    5678 config.go:182] Loaded profile config "no-preload-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:35:46.606049    5678 out.go:177] * The control-plane node no-preload-908000 host is not running: state=Stopped
	I0827 15:35:46.609800    5678 out.go:177]   To start a cluster, run: "minikube start -p no-preload-908000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-908000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000: exit status 7 (28.499708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-908000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000: exit status 7 (28.825459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
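
Two exit codes recur through these post-mortems: `pause` returns 83 when the control-plane host is stopped, and `status --format={{.Host}}` returns 7 for a stopped host (which the helper treats as "may be ok"). A sketch of how a harness can read those codes from the process exit status (binary path and profile name are copied from the log; the code-to-state mapping is only what this log shows):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// run executes the minikube binary used in this report and returns its exit code.
	func run(args ...string) int {
		cmd := exec.Command("out/minikube-darwin-arm64", args...)
		if err := cmd.Run(); err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) {
				return ee.ExitCode()
			}
			return -1 // the process could not be started at all
		}
		return 0
	}

	func main() {
		code := run("status", "--format={{.Host}}", "-p", "no-preload-908000")
		fmt.Println("status exit code:", code) // 7 while the VM is stopped
	}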

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-943000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-943000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.850724167s)

-- stdout --
	* [default-k8s-diff-port-943000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-943000" primary control-plane node in "default-k8s-diff-port-943000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-943000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:35:47.022102    5702 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:35:47.022234    5702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:47.022237    5702 out.go:358] Setting ErrFile to fd 2...
	I0827 15:35:47.022240    5702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:47.022373    5702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:35:47.023515    5702 out.go:352] Setting JSON to false
	I0827 15:35:47.039788    5702 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3912,"bootTime":1724794235,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:35:47.039852    5702 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:35:47.044071    5702 out.go:177] * [default-k8s-diff-port-943000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:35:47.050996    5702 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:35:47.051055    5702 notify.go:220] Checking for updates...
	I0827 15:35:47.058947    5702 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:35:47.062941    5702 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:35:47.065995    5702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:35:47.068989    5702 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:35:47.071922    5702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:35:47.075350    5702 config.go:182] Loaded profile config "embed-certs-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:35:47.075407    5702 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:35:47.075460    5702 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:35:47.078930    5702 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:35:47.086010    5702 start.go:297] selected driver: qemu2
	I0827 15:35:47.086019    5702 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:35:47.086026    5702 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:35:47.088298    5702 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 15:35:47.092889    5702 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:35:47.096016    5702 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:35:47.096032    5702 cni.go:84] Creating CNI manager for ""
	I0827 15:35:47.096040    5702 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:35:47.096044    5702 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 15:35:47.096079    5702 start.go:340] cluster config:
	{Name:default-k8s-diff-port-943000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-943000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:35:47.099741    5702 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:47.108983    5702 out.go:177] * Starting "default-k8s-diff-port-943000" primary control-plane node in "default-k8s-diff-port-943000" cluster
	I0827 15:35:47.112939    5702 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:35:47.112954    5702 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:35:47.112962    5702 cache.go:56] Caching tarball of preloaded images
	I0827 15:35:47.113023    5702 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:35:47.113029    5702 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:35:47.113084    5702 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/default-k8s-diff-port-943000/config.json ...
	I0827 15:35:47.113096    5702 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/default-k8s-diff-port-943000/config.json: {Name:mk4e1e8604e68526acef75e69857b87b5e140d05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:35:47.113333    5702 start.go:360] acquireMachinesLock for default-k8s-diff-port-943000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:47.113371    5702 start.go:364] duration metric: took 30.875µs to acquireMachinesLock for "default-k8s-diff-port-943000"
	I0827 15:35:47.113384    5702 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-943000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-943000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:35:47.113415    5702 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:35:47.120972    5702 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:35:47.138972    5702 start.go:159] libmachine.API.Create for "default-k8s-diff-port-943000" (driver="qemu2")
	I0827 15:35:47.139007    5702 client.go:168] LocalClient.Create starting
	I0827 15:35:47.139088    5702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:35:47.139121    5702 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:47.139129    5702 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:47.139170    5702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:35:47.139195    5702 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:47.139204    5702 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:47.139552    5702 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:35:47.301876    5702 main.go:141] libmachine: Creating SSH key...
	I0827 15:35:47.351215    5702 main.go:141] libmachine: Creating Disk image...
	I0827 15:35:47.351220    5702 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:35:47.351452    5702 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/disk.qcow2
	I0827 15:35:47.360562    5702 main.go:141] libmachine: STDOUT: 
	I0827 15:35:47.360578    5702 main.go:141] libmachine: STDERR: 
	I0827 15:35:47.360634    5702 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/disk.qcow2 +20000M
	I0827 15:35:47.368459    5702 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:35:47.368474    5702 main.go:141] libmachine: STDERR: 
	I0827 15:35:47.368491    5702 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/disk.qcow2
	I0827 15:35:47.368506    5702 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:35:47.368519    5702 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:47.368545    5702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:f3:9d:5c:57:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/disk.qcow2
	I0827 15:35:47.370129    5702 main.go:141] libmachine: STDOUT: 
	I0827 15:35:47.370146    5702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:47.370162    5702 client.go:171] duration metric: took 231.157375ms to LocalClient.Create
	I0827 15:35:49.372263    5702 start.go:128] duration metric: took 2.258904042s to createHost
	I0827 15:35:49.372336    5702 start.go:83] releasing machines lock for "default-k8s-diff-port-943000", held for 2.259027666s
	W0827 15:35:49.372395    5702 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:49.384334    5702 out.go:177] * Deleting "default-k8s-diff-port-943000" in qemu2 ...
	W0827 15:35:49.414231    5702 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:49.414252    5702 start.go:729] Will try again in 5 seconds ...
	I0827 15:35:54.416250    5702 start.go:360] acquireMachinesLock for default-k8s-diff-port-943000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:54.416712    5702 start.go:364] duration metric: took 370.666µs to acquireMachinesLock for "default-k8s-diff-port-943000"
	I0827 15:35:54.416847    5702 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-943000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-943000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:35:54.417112    5702 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:35:54.426498    5702 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:35:54.476860    5702 start.go:159] libmachine.API.Create for "default-k8s-diff-port-943000" (driver="qemu2")
	I0827 15:35:54.476910    5702 client.go:168] LocalClient.Create starting
	I0827 15:35:54.477013    5702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:35:54.477083    5702 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:54.477104    5702 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:54.477174    5702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:35:54.477226    5702 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:54.477236    5702 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:54.478083    5702 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:35:54.651355    5702 main.go:141] libmachine: Creating SSH key...
	I0827 15:35:54.781259    5702 main.go:141] libmachine: Creating Disk image...
	I0827 15:35:54.781265    5702 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:35:54.781490    5702 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/disk.qcow2
	I0827 15:35:54.791551    5702 main.go:141] libmachine: STDOUT: 
	I0827 15:35:54.791574    5702 main.go:141] libmachine: STDERR: 
	I0827 15:35:54.791629    5702 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/disk.qcow2 +20000M
	I0827 15:35:54.800639    5702 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:35:54.800656    5702 main.go:141] libmachine: STDERR: 
	I0827 15:35:54.800678    5702 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/disk.qcow2
	I0827 15:35:54.800683    5702 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:35:54.800695    5702 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:54.800719    5702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:15:2b:bf:0b:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/disk.qcow2
	I0827 15:35:54.802317    5702 main.go:141] libmachine: STDOUT: 
	I0827 15:35:54.802340    5702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:54.802354    5702 client.go:171] duration metric: took 325.447583ms to LocalClient.Create
	I0827 15:35:56.804465    5702 start.go:128] duration metric: took 2.387378208s to createHost
	I0827 15:35:56.804528    5702 start.go:83] releasing machines lock for "default-k8s-diff-port-943000", held for 2.387869875s
	W0827 15:35:56.804861    5702 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-943000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:56.815371    5702 out.go:201] 
	W0827 15:35:56.818418    5702 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:35:56.818444    5702 out.go:270] * 
	W0827 15:35:56.821313    5702 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:35:56.830313    5702 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-943000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000: exit status 7 (65.421333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-943000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.92s)
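
The FirstStart failures in this group share one root cause, visible in the stderr above: the qemu2 driver launches the VM through socket_vmnet_client, and the connection to the socket_vmnet daemon at /var/run/socket_vmnet is refused, so no host is ever created. A minimal triage sketch follows; the paths are taken from the log, while the Homebrew service name is an assumption and may differ per install:

    $ ls -l /var/run/socket_vmnet                # the socket should exist while the daemon is up
    $ pgrep -fl socket_vmnet                     # is the daemon process running at all?
    $ sudo brew services start socket_vmnet      # assumed Homebrew-managed install; (re)starts the daemon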

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-066000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000: exit status 7 (32.169833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
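
The "context does not exist" errors in this test and the ones that follow are cascading failures: FirstStart never created the cluster, so no kubeconfig context named embed-certs-066000 was ever written, and every kubectl call against it fails immediately. Plain kubectl can confirm this (standard subcommands, nothing minikube-specific assumed):

    $ kubectl config get-contexts                       # list every context in the active kubeconfig
    $ kubectl config get-contexts embed-certs-066000    # exits non-zero if the context is absent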

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-066000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-066000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-066000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.786166ms)

** stderr ** 
	error: context "embed-certs-066000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-066000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000: exit status 7 (28.566833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-066000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000: exit status 7 (29.721666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
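
The empty "+got" side of the diff means "image list" returned nothing, again because the VM never booted; the "-want" side is the stock v1.31.0 control-plane image set plus minikube's storage-provisioner. As a cross-check of the upstream portion of that list (assumes a kubeadm binary is on PATH):

    $ kubeadm config images list --kubernetes-version v1.31.0
    # prints the kube-apiserver/controller-manager/scheduler/proxy, coredns, etcd
    # and pause images matching the -want entries above; storage-provisioner is
    # minikube-specific and will not appear in kubeadm's output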

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-066000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-066000 --alsologtostderr -v=1: exit status 83 (41.986541ms)

-- stdout --
	* The control-plane node embed-certs-066000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-066000"

-- /stdout --
** stderr ** 
	I0827 15:35:51.397969    5724 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:35:51.398336    5724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:51.398340    5724 out.go:358] Setting ErrFile to fd 2...
	I0827 15:35:51.398343    5724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:51.398537    5724 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:35:51.398798    5724 out.go:352] Setting JSON to false
	I0827 15:35:51.398806    5724 mustload.go:65] Loading cluster: embed-certs-066000
	I0827 15:35:51.399142    5724 config.go:182] Loaded profile config "embed-certs-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:35:51.403698    5724 out.go:177] * The control-plane node embed-certs-066000 host is not running: state=Stopped
	I0827 15:35:51.407646    5724 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-066000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-066000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000: exit status 7 (28.614ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-066000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000: exit status 7 (29.549916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (10.19s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-666000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-666000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.122494084s)

-- stdout --
	* [newest-cni-666000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-666000" primary control-plane node in "newest-cni-666000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-666000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:35:51.712230    5741 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:35:51.712446    5741 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:51.712450    5741 out.go:358] Setting ErrFile to fd 2...
	I0827 15:35:51.712452    5741 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:35:51.712579    5741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:35:51.713879    5741 out.go:352] Setting JSON to false
	I0827 15:35:51.730319    5741 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3916,"bootTime":1724794235,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:35:51.730389    5741 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:35:51.735659    5741 out.go:177] * [newest-cni-666000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:35:51.742673    5741 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:35:51.742726    5741 notify.go:220] Checking for updates...
	I0827 15:35:51.749609    5741 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:35:51.751100    5741 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:35:51.755648    5741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:35:51.758637    5741 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:35:51.760104    5741 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:35:51.764079    5741 config.go:182] Loaded profile config "default-k8s-diff-port-943000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:35:51.764138    5741 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:35:51.764201    5741 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:35:51.767614    5741 out.go:177] * Using the qemu2 driver based on user configuration
	I0827 15:35:51.773577    5741 start.go:297] selected driver: qemu2
	I0827 15:35:51.773583    5741 start.go:901] validating driver "qemu2" against <nil>
	I0827 15:35:51.773589    5741 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:35:51.775909    5741 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0827 15:35:51.775931    5741 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0827 15:35:51.782494    5741 out.go:177] * Automatically selected the socket_vmnet network
	I0827 15:35:51.785706    5741 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0827 15:35:51.785751    5741 cni.go:84] Creating CNI manager for ""
	I0827 15:35:51.785760    5741 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:35:51.785764    5741 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 15:35:51.785798    5741 start.go:340] cluster config:
	{Name:newest-cni-666000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-666000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:35:51.789569    5741 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:35:51.797630    5741 out.go:177] * Starting "newest-cni-666000" primary control-plane node in "newest-cni-666000" cluster
	I0827 15:35:51.801599    5741 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:35:51.801615    5741 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:35:51.801624    5741 cache.go:56] Caching tarball of preloaded images
	I0827 15:35:51.801688    5741 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:35:51.801694    5741 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:35:51.801760    5741 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/newest-cni-666000/config.json ...
	I0827 15:35:51.801772    5741 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/newest-cni-666000/config.json: {Name:mkcc84741cde706a7b5a16d0f54ae2a4ed13f978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 15:35:51.802136    5741 start.go:360] acquireMachinesLock for newest-cni-666000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:51.802172    5741 start.go:364] duration metric: took 30µs to acquireMachinesLock for "newest-cni-666000"
	I0827 15:35:51.802184    5741 start.go:93] Provisioning new machine with config: &{Name:newest-cni-666000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-666000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:35:51.802229    5741 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:35:51.810623    5741 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:35:51.828747    5741 start.go:159] libmachine.API.Create for "newest-cni-666000" (driver="qemu2")
	I0827 15:35:51.828778    5741 client.go:168] LocalClient.Create starting
	I0827 15:35:51.828833    5741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:35:51.828865    5741 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:51.828876    5741 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:51.828915    5741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:35:51.828937    5741 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:51.828943    5741 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:51.829301    5741 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:35:51.986443    5741 main.go:141] libmachine: Creating SSH key...
	I0827 15:35:52.015668    5741 main.go:141] libmachine: Creating Disk image...
	I0827 15:35:52.015673    5741 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:35:52.015914    5741 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/disk.qcow2
	I0827 15:35:52.026375    5741 main.go:141] libmachine: STDOUT: 
	I0827 15:35:52.026399    5741 main.go:141] libmachine: STDERR: 
	I0827 15:35:52.026453    5741 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/disk.qcow2 +20000M
	I0827 15:35:52.034606    5741 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:35:52.034622    5741 main.go:141] libmachine: STDERR: 
	I0827 15:35:52.034644    5741 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/disk.qcow2
	I0827 15:35:52.034649    5741 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:35:52.034662    5741 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:52.034689    5741 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:fc:28:f7:08:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/disk.qcow2
	I0827 15:35:52.036255    5741 main.go:141] libmachine: STDOUT: 
	I0827 15:35:52.036270    5741 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:52.036296    5741 client.go:171] duration metric: took 207.511875ms to LocalClient.Create
	I0827 15:35:54.038409    5741 start.go:128] duration metric: took 2.236230666s to createHost
	I0827 15:35:54.038518    5741 start.go:83] releasing machines lock for "newest-cni-666000", held for 2.236407166s
	W0827 15:35:54.038582    5741 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:54.049573    5741 out.go:177] * Deleting "newest-cni-666000" in qemu2 ...
	W0827 15:35:54.083180    5741 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:35:54.083200    5741 start.go:729] Will try again in 5 seconds ...
	I0827 15:35:59.085227    5741 start.go:360] acquireMachinesLock for newest-cni-666000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:35:59.085562    5741 start.go:364] duration metric: took 264.042µs to acquireMachinesLock for "newest-cni-666000"
	I0827 15:35:59.085636    5741 start.go:93] Provisioning new machine with config: &{Name:newest-cni-666000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-666000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 15:35:59.085859    5741 start.go:125] createHost starting for "" (driver="qemu2")
	I0827 15:35:59.094587    5741 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 15:35:59.144334    5741 start.go:159] libmachine.API.Create for "newest-cni-666000" (driver="qemu2")
	I0827 15:35:59.144392    5741 client.go:168] LocalClient.Create starting
	I0827 15:35:59.144498    5741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/ca.pem
	I0827 15:35:59.144544    5741 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:59.144558    5741 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:59.144647    5741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19522-983/.minikube/certs/cert.pem
	I0827 15:35:59.144684    5741 main.go:141] libmachine: Decoding PEM data...
	I0827 15:35:59.144695    5741 main.go:141] libmachine: Parsing certificate...
	I0827 15:35:59.145386    5741 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19522-983/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso...
	I0827 15:35:59.315190    5741 main.go:141] libmachine: Creating SSH key...
	I0827 15:35:59.733258    5741 main.go:141] libmachine: Creating Disk image...
	I0827 15:35:59.733271    5741 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0827 15:35:59.733478    5741 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/disk.qcow2.raw /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/disk.qcow2
	I0827 15:35:59.743220    5741 main.go:141] libmachine: STDOUT: 
	I0827 15:35:59.743244    5741 main.go:141] libmachine: STDERR: 
	I0827 15:35:59.743323    5741 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/disk.qcow2 +20000M
	I0827 15:35:59.751541    5741 main.go:141] libmachine: STDOUT: Image resized.
	
	I0827 15:35:59.751558    5741 main.go:141] libmachine: STDERR: 
	I0827 15:35:59.751568    5741 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/disk.qcow2
	I0827 15:35:59.751573    5741 main.go:141] libmachine: Starting QEMU VM...
	I0827 15:35:59.751586    5741 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:35:59.751620    5741 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:54:8c:30:15:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/disk.qcow2
	I0827 15:35:59.753208    5741 main.go:141] libmachine: STDOUT: 
	I0827 15:35:59.753223    5741 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:35:59.753235    5741 client.go:171] duration metric: took 608.8565ms to LocalClient.Create
	I0827 15:36:01.755479    5741 start.go:128] duration metric: took 2.669627375s to createHost
	I0827 15:36:01.755602    5741 start.go:83] releasing machines lock for "newest-cni-666000", held for 2.670102958s
	W0827 15:36:01.755912    5741 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-666000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:36:01.764490    5741 out.go:201] 
	W0827 15:36:01.773989    5741 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:36:01.774019    5741 out.go:270] * 
	W0827 15:36:01.776631    5741 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:36:01.789449    5741 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-666000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-666000 -n newest-cni-666000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-666000 -n newest-cni-666000: exit status 7 (62.307167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-666000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.19s)
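
This is the same socket_vmnet failure as the other FirstStart tests. The failing hop can be reproduced in isolation by having socket_vmnet_client wrap a trivial command instead of qemu-system-aarch64; the client and socket paths are taken from the log, and "true" is an arbitrary placeholder payload:

    $ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # with the daemon down this fails the same way as the log above:
    # Failed to connect to "/var/run/socket_vmnet": Connection refused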

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-943000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-943000 create -f testdata/busybox.yaml: exit status 1 (29.57925ms)

** stderr ** 
	error: context "default-k8s-diff-port-943000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-943000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000: exit status 7 (28.49075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-943000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000: exit status 7 (29.429875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-943000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
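The kubectl error here is a downstream symptom rather than a new failure: since FirstStart never provisioned the VM, minikube never wrote a default-k8s-diff-port-943000 context into the kubeconfig, so every kubectl --context call in this group fails the same way. A small sketch using client-go to verify the missing context directly; the kubeconfig path is the KUBECONFIG value from the start logs and the context name comes from the test:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path copied from the KUBECONFIG line in the start logs above.
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19522-983/kubeconfig")
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		_, ok := cfg.Contexts["default-k8s-diff-port-943000"]
		fmt.Printf("context present: %v (contexts in file: %d)\n", ok, len(cfg.Contexts))
	}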

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-943000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-943000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-943000 describe deploy/metrics-server -n kube-system: exit status 1 (26.739209ms)

** stderr ** 
	error: context "default-k8s-diff-port-943000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-943000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000: exit status 7 (29.054041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-943000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
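The expected string in the assertion is just the --registries value for MetricsServer prefixed onto the --images value, which is why the test looks for "fake.domain/registry.k8s.io/echoserver:1.4". A one-line sketch of that composition, with both values copied from the addons enable command above:

	package main

	import "fmt"

	func main() {
		registry := "fake.domain"                 // from --registries=MetricsServer=...
		image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=...
		fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
	}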

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-943000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-943000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.025266334s)

-- stdout --
	* [default-k8s-diff-port-943000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-943000" primary control-plane node in "default-k8s-diff-port-943000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-943000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-943000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:36:00.852088    5798 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:36:00.852194    5798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:36:00.852198    5798 out.go:358] Setting ErrFile to fd 2...
	I0827 15:36:00.852200    5798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:36:00.852320    5798 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:36:00.853333    5798 out.go:352] Setting JSON to false
	I0827 15:36:00.869395    5798 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3925,"bootTime":1724794235,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:36:00.869479    5798 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:36:00.874246    5798 out.go:177] * [default-k8s-diff-port-943000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:36:00.881241    5798 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:36:00.881293    5798 notify.go:220] Checking for updates...
	I0827 15:36:00.888156    5798 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:36:00.891176    5798 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:36:00.894369    5798 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:36:00.897158    5798 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:36:00.900192    5798 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:36:00.903511    5798 config.go:182] Loaded profile config "default-k8s-diff-port-943000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:36:00.903766    5798 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:36:00.907168    5798 out.go:177] * Using the qemu2 driver based on existing profile
	I0827 15:36:00.914220    5798 start.go:297] selected driver: qemu2
	I0827 15:36:00.914228    5798 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-943000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-943000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:36:00.914298    5798 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:36:00.916630    5798 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 15:36:00.916657    5798 cni.go:84] Creating CNI manager for ""
	I0827 15:36:00.916666    5798 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:36:00.916699    5798 start.go:340] cluster config:
	{Name:default-k8s-diff-port-943000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-943000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:36:00.920193    5798 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:36:00.929177    5798 out.go:177] * Starting "default-k8s-diff-port-943000" primary control-plane node in "default-k8s-diff-port-943000" cluster
	I0827 15:36:00.933123    5798 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:36:00.933144    5798 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:36:00.933153    5798 cache.go:56] Caching tarball of preloaded images
	I0827 15:36:00.933214    5798 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:36:00.933227    5798 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:36:00.933280    5798 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/default-k8s-diff-port-943000/config.json ...
	I0827 15:36:00.933732    5798 start.go:360] acquireMachinesLock for default-k8s-diff-port-943000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:36:01.755735    5798 start.go:364] duration metric: took 821.9755ms to acquireMachinesLock for "default-k8s-diff-port-943000"
	I0827 15:36:01.755944    5798 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:36:01.755993    5798 fix.go:54] fixHost starting: 
	I0827 15:36:01.756754    5798 fix.go:112] recreateIfNeeded on default-k8s-diff-port-943000: state=Stopped err=<nil>
	W0827 15:36:01.756816    5798 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:36:01.772455    5798 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-943000" ...
	I0827 15:36:01.778469    5798 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:36:01.778700    5798 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:15:2b:bf:0b:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/disk.qcow2
	I0827 15:36:01.788951    5798 main.go:141] libmachine: STDOUT: 
	I0827 15:36:01.789030    5798 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:36:01.789176    5798 fix.go:56] duration metric: took 33.191083ms for fixHost
	I0827 15:36:01.789203    5798 start.go:83] releasing machines lock for "default-k8s-diff-port-943000", held for 33.378583ms
	W0827 15:36:01.789248    5798 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:36:01.789419    5798 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:36:01.789440    5798 start.go:729] Will try again in 5 seconds ...
	I0827 15:36:06.791515    5798 start.go:360] acquireMachinesLock for default-k8s-diff-port-943000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:36:06.791939    5798 start.go:364] duration metric: took 297.584µs to acquireMachinesLock for "default-k8s-diff-port-943000"
	I0827 15:36:06.792069    5798 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:36:06.792089    5798 fix.go:54] fixHost starting: 
	I0827 15:36:06.792788    5798 fix.go:112] recreateIfNeeded on default-k8s-diff-port-943000: state=Stopped err=<nil>
	W0827 15:36:06.792815    5798 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:36:06.802383    5798 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-943000" ...
	I0827 15:36:06.805433    5798 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:36:06.805628    5798 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:15:2b:bf:0b:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/default-k8s-diff-port-943000/disk.qcow2
	I0827 15:36:06.814503    5798 main.go:141] libmachine: STDOUT: 
	I0827 15:36:06.814592    5798 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:36:06.814681    5798 fix.go:56] duration metric: took 22.589625ms for fixHost
	I0827 15:36:06.814706    5798 start.go:83] releasing machines lock for "default-k8s-diff-port-943000", held for 22.7435ms
	W0827 15:36:06.814888    5798 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-943000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-943000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:36:06.822356    5798 out.go:201] 
	W0827 15:36:06.825389    5798 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:36:06.825456    5798 out.go:270] * 
	* 
	W0827 15:36:06.828098    5798 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:36:06.837326    5798 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-943000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000: exit status 7 (67.886958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-943000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.09s)
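The stderr above shows minikube's built-in retry: the driver start fails on the refused socket, sleeps five seconds ("Will try again in 5 seconds ..."), retries once, then exits 80 with GUEST_PROVISION. A compact sketch of that control flow; startHost is a hypothetical stand-in for the fixHost/driver-start path, and the timing and single retry are read off the log:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for minikube's host-start path; in this run it
	// always fails the same way.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}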

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-666000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-666000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.1846035s)

-- stdout --
	* [newest-cni-666000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-666000" primary control-plane node in "newest-cni-666000" cluster
	* Restarting existing qemu2 VM for "newest-cni-666000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-666000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0827 15:36:03.963504    5828 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:36:03.963639    5828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:36:03.963642    5828 out.go:358] Setting ErrFile to fd 2...
	I0827 15:36:03.963645    5828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:36:03.963760    5828 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:36:03.964785    5828 out.go:352] Setting JSON to false
	I0827 15:36:03.980920    5828 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3928,"bootTime":1724794235,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 15:36:03.980991    5828 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 15:36:03.986175    5828 out.go:177] * [newest-cni-666000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 15:36:03.993395    5828 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 15:36:03.993474    5828 notify.go:220] Checking for updates...
	I0827 15:36:04.000330    5828 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 15:36:04.004386    5828 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 15:36:04.007350    5828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 15:36:04.010357    5828 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 15:36:04.013386    5828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 15:36:04.016640    5828 config.go:182] Loaded profile config "newest-cni-666000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:36:04.016933    5828 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 15:36:04.020335    5828 out.go:177] * Using the qemu2 driver based on existing profile
	I0827 15:36:04.026290    5828 start.go:297] selected driver: qemu2
	I0827 15:36:04.026297    5828 start.go:901] validating driver "qemu2" against &{Name:newest-cni-666000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-666000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:36:04.026351    5828 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 15:36:04.028729    5828 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0827 15:36:04.028767    5828 cni.go:84] Creating CNI manager for ""
	I0827 15:36:04.028774    5828 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 15:36:04.028800    5828 start.go:340] cluster config:
	{Name:newest-cni-666000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-666000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 15:36:04.032197    5828 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 15:36:04.037415    5828 out.go:177] * Starting "newest-cni-666000" primary control-plane node in "newest-cni-666000" cluster
	I0827 15:36:04.044382    5828 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 15:36:04.044396    5828 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 15:36:04.044405    5828 cache.go:56] Caching tarball of preloaded images
	I0827 15:36:04.044464    5828 preload.go:172] Found /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 15:36:04.044469    5828 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 15:36:04.044538    5828 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/newest-cni-666000/config.json ...
	I0827 15:36:04.044991    5828 start.go:360] acquireMachinesLock for newest-cni-666000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:36:04.045018    5828 start.go:364] duration metric: took 21.791µs to acquireMachinesLock for "newest-cni-666000"
	I0827 15:36:04.045027    5828 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:36:04.045033    5828 fix.go:54] fixHost starting: 
	I0827 15:36:04.045161    5828 fix.go:112] recreateIfNeeded on newest-cni-666000: state=Stopped err=<nil>
	W0827 15:36:04.045170    5828 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:36:04.048281    5828 out.go:177] * Restarting existing qemu2 VM for "newest-cni-666000" ...
	I0827 15:36:04.056375    5828 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:36:04.056407    5828 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:54:8c:30:15:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/disk.qcow2
	I0827 15:36:04.058547    5828 main.go:141] libmachine: STDOUT: 
	I0827 15:36:04.058568    5828 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:36:04.058594    5828 fix.go:56] duration metric: took 13.562792ms for fixHost
	I0827 15:36:04.058598    5828 start.go:83] releasing machines lock for "newest-cni-666000", held for 13.576625ms
	W0827 15:36:04.058606    5828 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:36:04.058643    5828 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:36:04.058648    5828 start.go:729] Will try again in 5 seconds ...
	I0827 15:36:09.060655    5828 start.go:360] acquireMachinesLock for newest-cni-666000: {Name:mka0a97fe84f2fee930c1c6ad2379337c089aa32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 15:36:09.061168    5828 start.go:364] duration metric: took 424.5µs to acquireMachinesLock for "newest-cni-666000"
	I0827 15:36:09.061312    5828 start.go:96] Skipping create...Using existing machine configuration
	I0827 15:36:09.061332    5828 fix.go:54] fixHost starting: 
	I0827 15:36:09.062112    5828 fix.go:112] recreateIfNeeded on newest-cni-666000: state=Stopped err=<nil>
	W0827 15:36:09.062139    5828 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 15:36:09.071480    5828 out.go:177] * Restarting existing qemu2 VM for "newest-cni-666000" ...
	I0827 15:36:09.075527    5828 qemu.go:418] Using hvf for hardware acceleration
	I0827 15:36:09.075713    5828 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:54:8c:30:15:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19522-983/.minikube/machines/newest-cni-666000/disk.qcow2
	I0827 15:36:09.085875    5828 main.go:141] libmachine: STDOUT: 
	I0827 15:36:09.085934    5828 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0827 15:36:09.086053    5828 fix.go:56] duration metric: took 24.687917ms for fixHost
	I0827 15:36:09.086066    5828 start.go:83] releasing machines lock for "newest-cni-666000", held for 24.877083ms
	W0827 15:36:09.086242    5828 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-666000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-666000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0827 15:36:09.094535    5828 out.go:201] 
	W0827 15:36:09.097650    5828 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0827 15:36:09.097673    5828 out.go:270] * 
	* 
	W0827 15:36:09.100248    5828 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 15:36:09.111451    5828 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-666000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-666000 -n newest-cni-666000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-666000 -n newest-cni-666000: exit status 7 (70.93425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-666000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-943000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000: exit status 7 (32.753417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-943000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
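The recurring "exit status 7 (may be ok)" from the status probes is a bitmask rather than a plain error code: per the description in minikube status --help, the low bits encode VM, cluster, and Kubernetes health from right to left, so 7 = 1 + 2 + 4 means all three are not OK, which is consistent with the Stopped host. A sketch decoding it; the bit assignments follow that help text and are not verified against this particular binary:

	package main

	import "fmt"

	func main() {
		// Per `minikube status --help`: exit status encodes component
		// state in bits, e.g. 7 = 1 (VM NOK) + 2 (cluster NOK) + 4 (Kubernetes NOK).
		code := 7
		fmt.Println("VM not OK:        ", code&1 != 0)
		fmt.Println("cluster not OK:   ", code&2 != 0)
		fmt.Println("Kubernetes not OK:", code&4 != 0)
	}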

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-943000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-943000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-943000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.604167ms)

** stderr ** 
	error: context "default-k8s-diff-port-943000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-943000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000: exit status 7 (29.280834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-943000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-943000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000: exit status 7 (28.738625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-943000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
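The (-want +got) block above has the output shape of a go-cmp diff: every expected v1.31.0 image sits on the -want side and nothing appears on the +got side, because image list against the stopped profile returns an empty list. A sketch reproducing the same comparison with github.com/google/go-cmp; the want list is copied verbatim from the test output:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/kube-controller-manager:v1.31.0",
			"registry.k8s.io/kube-proxy:v1.31.0",
			"registry.k8s.io/kube-scheduler:v1.31.0",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // a stopped VM reports no images
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
		}
	}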

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-943000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-943000 --alsologtostderr -v=1: exit status 83 (40.019375ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-943000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-943000"

-- /stdout --
** stderr ** 
	I0827 15:36:07.103774    5849 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:36:07.103926    5849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:36:07.103929    5849 out.go:358] Setting ErrFile to fd 2...
	I0827 15:36:07.103931    5849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:36:07.104052    5849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:36:07.104280    5849 out.go:352] Setting JSON to false
	I0827 15:36:07.104288    5849 mustload.go:65] Loading cluster: default-k8s-diff-port-943000
	I0827 15:36:07.104487    5849 config.go:182] Loaded profile config "default-k8s-diff-port-943000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:36:07.109290    5849 out.go:177] * The control-plane node default-k8s-diff-port-943000 host is not running: state=Stopped
	I0827 15:36:07.113289    5849 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-943000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-943000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000: exit status 7 (29.110458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-943000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000: exit status 7 (29.523458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-943000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-666000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-666000 -n newest-cni-666000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-666000 -n newest-cni-666000: exit status 7 (30.143209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-666000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-666000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-666000 --alsologtostderr -v=1: exit status 83 (41.239583ms)

-- stdout --
	* The control-plane node newest-cni-666000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-666000"

-- /stdout --
** stderr ** 
	I0827 15:36:09.297179    5873 out.go:345] Setting OutFile to fd 1 ...
	I0827 15:36:09.297344    5873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:36:09.297347    5873 out.go:358] Setting ErrFile to fd 2...
	I0827 15:36:09.297350    5873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 15:36:09.297489    5873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 15:36:09.297709    5873 out.go:352] Setting JSON to false
	I0827 15:36:09.297717    5873 mustload.go:65] Loading cluster: newest-cni-666000
	I0827 15:36:09.297925    5873 config.go:182] Loaded profile config "newest-cni-666000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 15:36:09.301037    5873 out.go:177] * The control-plane node newest-cni-666000 host is not running: state=Stopped
	I0827 15:36:09.305042    5873 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-666000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-666000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-666000 -n newest-cni-666000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-666000 -n newest-cni-666000: exit status 7 (30.260583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-666000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-666000 -n newest-cni-666000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-666000 -n newest-cni-666000: exit status 7 (29.489208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-666000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (155/270)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 6.73
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.11
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.36
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 146.69
29 TestAddons/serial/Volcano 39.36
31 TestAddons/serial/GCPAuth/Namespaces 0.08
33 TestAddons/parallel/Registry 14.25
34 TestAddons/parallel/Ingress 17.64
35 TestAddons/parallel/InspektorGadget 10.31
36 TestAddons/parallel/MetricsServer 6.25
39 TestAddons/parallel/CSI 54.44
40 TestAddons/parallel/Headlamp 16.66
41 TestAddons/parallel/CloudSpanner 5.2
42 TestAddons/parallel/LocalPath 9.58
43 TestAddons/parallel/NvidiaDevicePlugin 6.16
44 TestAddons/parallel/Yakd 11.24
45 TestAddons/StoppedEnableDisable 12.4
53 TestHyperKitDriverInstallOrUpdate 11.03
56 TestErrorSpam/setup 36.88
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.25
59 TestErrorSpam/pause 0.69
60 TestErrorSpam/unpause 0.61
61 TestErrorSpam/stop 55.34
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 48.17
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.54
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 5.01
73 TestFunctional/serial/CacheCmd/cache/add_local 1.12
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.12
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.73
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.01
81 TestFunctional/serial/ExtraConfig 32.4
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.68
84 TestFunctional/serial/LogsFileCmd 0.6
85 TestFunctional/serial/InvalidService 4.61
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 10.1
89 TestFunctional/parallel/DryRun 0.23
90 TestFunctional/parallel/InternationalLanguage 0.12
91 TestFunctional/parallel/StatusCmd 0.24
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 25.89
99 TestFunctional/parallel/SSHCmd 0.13
100 TestFunctional/parallel/CpCmd 0.41
102 TestFunctional/parallel/FileSync 0.06
103 TestFunctional/parallel/CertSync 0.38
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.09
111 TestFunctional/parallel/License 0.4
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.06
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
124 TestFunctional/parallel/ServiceCmd/List 0.32
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
127 TestFunctional/parallel/ServiceCmd/Format 0.1
128 TestFunctional/parallel/ServiceCmd/URL 0.1
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
130 TestFunctional/parallel/ProfileCmd/profile_list 0.12
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
132 TestFunctional/parallel/MountCmd/any-port 7.24
133 TestFunctional/parallel/MountCmd/specific-port 1.15
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.43
135 TestFunctional/parallel/Version/short 0.04
136 TestFunctional/parallel/Version/components 0.15
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
141 TestFunctional/parallel/ImageCommands/ImageBuild 1.67
142 TestFunctional/parallel/ImageCommands/Setup 1.87
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.51
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.38
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.28
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.16
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.33
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.18
150 TestFunctional/parallel/DockerEnv/bash 0.27
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 203.02
161 TestMultiControlPlane/serial/DeployApp 8.09
162 TestMultiControlPlane/serial/PingHostFromPods 0.73
163 TestMultiControlPlane/serial/AddWorkerNode 55.94
164 TestMultiControlPlane/serial/NodeLabels 0.15
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.25
166 TestMultiControlPlane/serial/CopyFile 4.36
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 150.1
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 3.49
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.21
208 TestMainNoArgs 0.03
255 TestStoppedBinaryUpgrade/Setup 1.07
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
272 TestNoKubernetes/serial/ProfileList 31.4
273 TestNoKubernetes/serial/Stop 3.73
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
285 TestStoppedBinaryUpgrade/MinikubeLogs 0.71
290 TestStartStop/group/old-k8s-version/serial/Stop 2.03
291 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
303 TestStartStop/group/no-preload/serial/Stop 2.01
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
308 TestStartStop/group/embed-certs/serial/Stop 1.82
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.59
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
330 TestStartStop/group/newest-cni/serial/Stop 1.88
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-712000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-712000: exit status 85 (100.225709ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-712000 | jenkins | v1.33.1 | 27 Aug 24 14:36 PDT |          |
	|         | -p download-only-712000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 14:36:31
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 14:36:31.487855    1465 out.go:345] Setting OutFile to fd 1 ...
	I0827 14:36:31.487990    1465 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 14:36:31.487993    1465 out.go:358] Setting ErrFile to fd 2...
	I0827 14:36:31.487996    1465 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 14:36:31.488118    1465 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	W0827 14:36:31.488213    1465 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19522-983/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19522-983/.minikube/config/config.json: no such file or directory
	I0827 14:36:31.489518    1465 out.go:352] Setting JSON to true
	I0827 14:36:31.507464    1465 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":356,"bootTime":1724794235,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 14:36:31.507527    1465 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 14:36:31.512841    1465 out.go:97] [download-only-712000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 14:36:31.512956    1465 notify.go:220] Checking for updates...
	W0827 14:36:31.513019    1465 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball: no such file or directory
	I0827 14:36:31.515789    1465 out.go:169] MINIKUBE_LOCATION=19522
	I0827 14:36:31.519830    1465 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 14:36:31.524805    1465 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 14:36:31.527787    1465 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 14:36:31.530774    1465 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	W0827 14:36:31.536823    1465 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0827 14:36:31.537076    1465 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 14:36:31.541782    1465 out.go:97] Using the qemu2 driver based on user configuration
	I0827 14:36:31.541801    1465 start.go:297] selected driver: qemu2
	I0827 14:36:31.541822    1465 start.go:901] validating driver "qemu2" against <nil>
	I0827 14:36:31.541899    1465 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 14:36:31.544792    1465 out.go:169] Automatically selected the socket_vmnet network
	I0827 14:36:31.550577    1465 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0827 14:36:31.550673    1465 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0827 14:36:31.550758    1465 cni.go:84] Creating CNI manager for ""
	I0827 14:36:31.550777    1465 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0827 14:36:31.550828    1465 start.go:340] cluster config:
	{Name:download-only-712000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-712000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 14:36:31.556030    1465 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 14:36:31.560820    1465 out.go:97] Downloading VM boot image ...
	I0827 14:36:31.560849    1465 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/iso/arm64/minikube-v1.33.1-1724692311-19511-arm64.iso
	I0827 14:36:37.868979    1465 out.go:97] Starting "download-only-712000" primary control-plane node in "download-only-712000" cluster
	I0827 14:36:37.868997    1465 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0827 14:36:37.932357    1465 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0827 14:36:37.932367    1465 cache.go:56] Caching tarball of preloaded images
	I0827 14:36:37.932539    1465 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0827 14:36:37.937147    1465 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0827 14:36:37.937154    1465 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0827 14:36:38.024520    1465 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0827 14:36:48.637640    1465 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0827 14:36:48.637815    1465 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0827 14:36:49.331307    1465 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0827 14:36:49.331505    1465 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/download-only-712000/config.json ...
	I0827 14:36:49.331520    1465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/download-only-712000/config.json: {Name:mk14739c14f7bcda25e7b10d533a7a0346d39491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 14:36:49.331759    1465 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0827 14:36:49.331975    1465 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0827 14:36:49.854688    1465 out.go:193] 
	W0827 14:36:49.860617    1465 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19522-983/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10734f920 0x10734f920 0x10734f920 0x10734f920 0x10734f920 0x10734f920 0x10734f920] Decompressors:map[bz2:0x1400065f900 gz:0x1400065f908 tar:0x1400065f8b0 tar.bz2:0x1400065f8c0 tar.gz:0x1400065f8d0 tar.xz:0x1400065f8e0 tar.zst:0x1400065f8f0 tbz2:0x1400065f8c0 tgz:0x1400065f8d0 txz:0x1400065f8e0 tzst:0x1400065f8f0 xz:0x1400065f910 zip:0x1400065f920 zst:0x1400065f918] Getters:map[file:0x14000634670 http:0x1400017c230 https:0x1400017c280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0827 14:36:49.860645    1465 out_reason.go:110] 
	W0827 14:36:49.869462    1465 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 14:36:49.873542    1465 out.go:193] 
	
	
	* The control-plane node download-only-712000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-712000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-712000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0/json-events (6.73s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-352000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-352000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (6.732417667s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (6.73s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-352000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-352000: exit status 85 (74.507833ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-712000 | jenkins | v1.33.1 | 27 Aug 24 14:36 PDT |                     |
	|         | -p download-only-712000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 27 Aug 24 14:36 PDT | 27 Aug 24 14:36 PDT |
	| delete  | -p download-only-712000        | download-only-712000 | jenkins | v1.33.1 | 27 Aug 24 14:36 PDT | 27 Aug 24 14:36 PDT |
	| start   | -o=json --download-only        | download-only-352000 | jenkins | v1.33.1 | 27 Aug 24 14:36 PDT |                     |
	|         | -p download-only-352000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 14:36:50
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 14:36:50.292561    1495 out.go:345] Setting OutFile to fd 1 ...
	I0827 14:36:50.292684    1495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 14:36:50.292688    1495 out.go:358] Setting ErrFile to fd 2...
	I0827 14:36:50.292690    1495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 14:36:50.292811    1495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 14:36:50.293869    1495 out.go:352] Setting JSON to true
	I0827 14:36:50.311734    1495 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":375,"bootTime":1724794235,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 14:36:50.311798    1495 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 14:36:50.316556    1495 out.go:97] [download-only-352000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 14:36:50.316637    1495 notify.go:220] Checking for updates...
	I0827 14:36:50.321020    1495 out.go:169] MINIKUBE_LOCATION=19522
	I0827 14:36:50.324647    1495 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 14:36:50.327615    1495 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 14:36:50.330552    1495 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 14:36:50.334599    1495 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	W0827 14:36:50.340538    1495 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0827 14:36:50.340703    1495 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 14:36:50.343549    1495 out.go:97] Using the qemu2 driver based on user configuration
	I0827 14:36:50.343557    1495 start.go:297] selected driver: qemu2
	I0827 14:36:50.343560    1495 start.go:901] validating driver "qemu2" against <nil>
	I0827 14:36:50.343612    1495 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 14:36:50.346505    1495 out.go:169] Automatically selected the socket_vmnet network
	I0827 14:36:50.352025    1495 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0827 14:36:50.352126    1495 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0827 14:36:50.352146    1495 cni.go:84] Creating CNI manager for ""
	I0827 14:36:50.352158    1495 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 14:36:50.352163    1495 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 14:36:50.352201    1495 start.go:340] cluster config:
	{Name:download-only-352000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-352000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 14:36:50.356112    1495 iso.go:125] acquiring lock: {Name:mkdf76980328fbbb833db68ffc6577b810326eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 14:36:50.359579    1495 out.go:97] Starting "download-only-352000" primary control-plane node in "download-only-352000" cluster
	I0827 14:36:50.359585    1495 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 14:36:50.421298    1495 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 14:36:50.421331    1495 cache.go:56] Caching tarball of preloaded images
	I0827 14:36:50.421556    1495 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 14:36:50.426552    1495 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0827 14:36:50.426560    1495 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0827 14:36:50.515212    1495 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 14:36:54.824904    1495 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0827 14:36:54.825054    1495 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-352000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-352000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-352000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.36s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-823000 --alsologtostderr --binary-mirror http://127.0.0.1:49311 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-823000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-823000
--- PASS: TestBinaryMirror (0.36s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-657000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-657000: exit status 85 (58.605042ms)

-- stdout --
	* Profile "addons-657000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-657000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-657000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-657000: exit status 85 (71.516708ms)

-- stdout --
	* Profile "addons-657000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-657000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (146.69s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-657000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-657000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (2m26.689654375s)
--- PASS: TestAddons/Setup (146.69s)

TestAddons/serial/Volcano (39.36s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 7.425708ms
addons_test.go:905: volcano-admission stabilized in 7.57575ms
addons_test.go:897: volcano-scheduler stabilized in 7.6645ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-vsl8c" [28ef69d2-6280-4486-8d3d-b2d8b4d2909c] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004908041s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-5mwqm" [a1355c07-1569-4444-a22b-a40a69390c68] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.009731625s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-6nqr6" [69501973-b3e5-4733-a316-6c7eb7df7042] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.008560667s
addons_test.go:932: (dbg) Run:  kubectl --context addons-657000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-657000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-657000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [72c687ad-91c7-4dfe-96b7-18900fc801ea] Pending
helpers_test.go:344: "test-job-nginx-0" [72c687ad-91c7-4dfe-96b7-18900fc801ea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [72c687ad-91c7-4dfe-96b7-18900fc801ea] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.005748459s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-657000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-657000 addons disable volcano --alsologtostderr -v=1: (10.09207525s)
--- PASS: TestAddons/serial/Volcano (39.36s)

TestAddons/serial/GCPAuth/Namespaces (0.08s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-657000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-657000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/parallel/Registry (14.25s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.271834ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-lx4cv" [a5dcc77d-6fcd-46d4-83ff-98d2292c9ed7] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005955542s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-n7bkx" [9dbdbd06-356e-4c8a-9c59-07febcb15387] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003874666s
addons_test.go:342: (dbg) Run:  kubectl --context addons-657000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-657000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-657000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.967541042s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-657000 ip
2024/08/27 14:40:34 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-657000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.25s)

TestAddons/parallel/Ingress (17.64s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-657000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-657000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-657000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [98e5ff40-54d1-4155-ad53-453876a4f3ac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [98e5ff40-54d1-4155-ad53-453876a4f3ac] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00910325s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-657000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-657000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-657000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-657000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-657000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-657000 addons disable ingress --alsologtostderr -v=1: (7.246421834s)
--- PASS: TestAddons/parallel/Ingress (17.64s)

TestAddons/parallel/InspektorGadget (10.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hjmqc" [a39fe8d2-cec3-49af-9b6a-4dcb6e7edeff] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011634584s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-657000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-657000: (5.301529917s)
--- PASS: TestAddons/parallel/InspektorGadget (10.31s)

TestAddons/parallel/MetricsServer (6.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.3465ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-cr6nf" [5318b4ff-4fe2-4c67-aaae-e57aaed0c754] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004049875s
addons_test.go:417: (dbg) Run:  kubectl --context addons-657000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-657000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.25s)

TestAddons/parallel/CSI (54.44s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.058209ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-657000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-657000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1a4ee3fb-be7c-4a87-be94-aab067b0ec52] Pending
helpers_test.go:344: "task-pv-pod" [1a4ee3fb-be7c-4a87-be94-aab067b0ec52] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1a4ee3fb-be7c-4a87-be94-aab067b0ec52] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.011706542s
addons_test.go:590: (dbg) Run:  kubectl --context addons-657000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-657000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-657000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-657000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-657000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-657000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-657000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9cb8f319-c406-4b3c-8ef1-52ec8df9c4b3] Pending
helpers_test.go:344: "task-pv-pod-restore" [9cb8f319-c406-4b3c-8ef1-52ec8df9c4b3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9cb8f319-c406-4b3c-8ef1-52ec8df9c4b3] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005826292s
addons_test.go:632: (dbg) Run:  kubectl --context addons-657000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-657000 delete pod task-pv-pod-restore: (1.200825667s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-657000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-657000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-657000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-657000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.112826334s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-657000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.44s)
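For reference, the CSI flow exercised above (provision a PVC, snapshot it, delete the original, restore from the snapshot) can be replayed by hand against the same profile. A minimal sketch, assuming the addons-657000 profile and the testdata manifests named in the log, with the readiness poll loops omitted:

  kubectl --context addons-657000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-657000 create -f testdata/csi-hostpath-driver/snapshot.yaml
  # poll until the snapshot reports readyToUse=true
  kubectl --context addons-657000 get volumesnapshot new-snapshot-demo -n default -o jsonpath={.status.readyToUse}
  kubectl --context addons-657000 delete pod task-pv-pod
  kubectl --context addons-657000 delete pvc hpvc
  kubectl --context addons-657000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-657000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml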

TestAddons/parallel/Headlamp (16.66s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-657000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-cctbk" [735effc1-ba91-43fd-aee5-7d266366d3f9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-cctbk" [735effc1-ba91-43fd-aee5-7d266366d3f9] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004574s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-657000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-657000 addons disable headlamp --alsologtostderr -v=1: (5.240409625s)
--- PASS: TestAddons/parallel/Headlamp (16.66s)

TestAddons/parallel/CloudSpanner (5.2s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-8vwtw" [c7fab1ea-0f41-4048-b469-9a1a4418b7d5] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.013015792s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-657000
--- PASS: TestAddons/parallel/CloudSpanner (5.20s)

TestAddons/parallel/LocalPath (9.58s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-657000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-657000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c24f6b9e-d40f-4a38-ba8e-3fc78a02d539] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c24f6b9e-d40f-4a38-ba8e-3fc78a02d539] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c24f6b9e-d40f-4a38-ba8e-3fc78a02d539] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003865625s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-657000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-657000 ssh "cat /opt/local-path-provisioner/pvc-84ca5d70-c742-4c5f-a519-daa5d98ff02d_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-657000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-657000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-657000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.58s)
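The local-path check above boils down to: create a PVC and a pod that writes to it, then read the file back from the provisioner's host directory over SSH. A sketch, assuming the same profile and testdata manifests; note the provisioned directory embeds the generated PV name, which differs on every run:

  kubectl --context addons-657000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl --context addons-657000 apply -f testdata/storage-provisioner-rancher/pod.yaml
  # <pv-name> is the volumeName reported by: kubectl get pvc test-pvc -o=json
  out/minikube-darwin-arm64 -p addons-657000 ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"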

TestAddons/parallel/NvidiaDevicePlugin (6.16s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6zpt2" [74a51439-a8b0-4135-ae47-01caef39cfcd] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005287208s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-657000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.16s)

TestAddons/parallel/Yakd (11.24s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-hh9cv" [7553590c-139c-4d19-9d0d-7501d9843798] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00690525s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-657000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-657000 addons disable yakd --alsologtostderr -v=1: (5.237257292s)
--- PASS: TestAddons/parallel/Yakd (11.24s)

TestAddons/StoppedEnableDisable (12.4s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-657000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-657000: (12.214312041s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-657000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-657000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-657000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)
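What this test verifies is that addon toggling still works against a stopped cluster; the same commands can be run by hand (a sketch, same profile assumed):

  out/minikube-darwin-arm64 stop -p addons-657000
  out/minikube-darwin-arm64 addons enable dashboard -p addons-657000
  out/minikube-darwin-arm64 addons disable dashboard -p addons-657000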

TestHyperKitDriverInstallOrUpdate (11.03s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.03s)

TestErrorSpam/setup (36.88s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-428000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-428000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 --driver=qemu2 : (36.87769625s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (36.88s)

TestErrorSpam/start (0.34s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.25s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.69s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 pause
--- PASS: TestErrorSpam/pause (0.69s)

TestErrorSpam/unpause (0.61s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 unpause
--- PASS: TestErrorSpam/unpause (0.61s)

TestErrorSpam/stop (55.34s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 stop: (3.192086125s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 stop: (26.109729666s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-428000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-428000 stop: (26.034414458s)
--- PASS: TestErrorSpam/stop (55.34s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19522-983/.minikube/files/etc/test/nested/copy/1463/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.17s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-289000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-289000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (48.168494625s)
--- PASS: TestFunctional/serial/StartWithProxy (48.17s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.54s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-289000 --alsologtostderr -v=8
E0827 14:44:24.602936    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:44:24.611270    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:44:24.624675    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:44:24.647803    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:44:24.691151    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:44:24.774584    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:44:24.938109    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:44:25.261686    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:44:25.905449    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:44:27.189424    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:44:29.752950    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:44:34.876334    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:44:45.120083    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-289000 --alsologtostderr -v=8: (38.540642416s)
functional_test.go:663: soft start took 38.541143875s for "functional-289000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.54s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-289000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.01s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-289000 cache add registry.k8s.io/pause:3.1: (1.921417s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-289000 cache add registry.k8s.io/pause:3.3: (1.799754625s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-289000 cache add registry.k8s.io/pause:latest: (1.290769291s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.01s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-289000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2231568151/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 cache add minikube-local-cache-test:functional-289000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 cache delete minikube-local-cache-test:functional-289000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-289000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-289000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (66.153833ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.12s)
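The reload round-trip above can be reproduced by hand: remove a cached image inside the node, confirm it is gone, then let "cache reload" push it back from the host-side cache. A sketch, assuming the functional-289000 profile:

  out/minikube-darwin-arm64 -p functional-289000 ssh sudo docker rmi registry.k8s.io/pause:latest
  out/minikube-darwin-arm64 -p functional-289000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exits non-zero: image absent
  out/minikube-darwin-arm64 -p functional-289000 cache reload
  out/minikube-darwin-arm64 -p functional-289000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again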

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.73s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 kubectl -- --context functional-289000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.73s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-289000 get pods
E0827 14:45:05.603546    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:741: (dbg) Done: out/kubectl --context functional-289000 get pods: (1.009174917s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

TestFunctional/serial/ExtraConfig (32.4s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-289000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-289000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.39886s)
functional_test.go:761: restart took 32.398960167s for "functional-289000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.40s)

TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-289000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.68s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.68s)

TestFunctional/serial/LogsFileCmd (0.6s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd564784057/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.60s)

TestFunctional/serial/InvalidService (4.61s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-289000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-289000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-289000: exit status 115 (148.217958ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31875 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-289000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-289000 delete -f testdata/invalidsvc.yaml: (1.361100709s)
--- PASS: TestFunctional/serial/InvalidService (4.61s)
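The failure mode above (exit status 115, SVC_UNREACHABLE) is the expected behavior when "minikube service" targets a service whose pods never start; a sketch of the same round-trip, same profile assumed:

  kubectl --context functional-289000 apply -f testdata/invalidsvc.yaml
  out/minikube-darwin-arm64 service invalid-svc -p functional-289000  # expected: exit status 115
  kubectl --context functional-289000 delete -f testdata/invalidsvc.yaml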

TestFunctional/parallel/ConfigCmd (0.23s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-289000 config get cpus: exit status 14 (34.746ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-289000 config get cpus: exit status 14 (31.386125ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
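The exit codes above are the contract being tested: "config get" on an unset key exits with status 14 and "specified key could not be found in config", while a set key returns its value with exit 0. A sketch:

  out/minikube-darwin-arm64 -p functional-289000 config set cpus 2
  out/minikube-darwin-arm64 -p functional-289000 config get cpus    # prints 2, exit 0
  out/minikube-darwin-arm64 -p functional-289000 config unset cpus
  out/minikube-darwin-arm64 -p functional-289000 config get cpus    # expected: exit status 14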

TestFunctional/parallel/DashboardCmd (10.1s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-289000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-289000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2102: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.10s)

TestFunctional/parallel/DryRun (0.23s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-289000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-289000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.828375ms)
-- stdout --
	* [functional-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0827 14:46:28.286405    2089 out.go:345] Setting OutFile to fd 1 ...
	I0827 14:46:28.286550    2089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 14:46:28.286553    2089 out.go:358] Setting ErrFile to fd 2...
	I0827 14:46:28.286555    2089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 14:46:28.286677    2089 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 14:46:28.287731    2089 out.go:352] Setting JSON to false
	I0827 14:46:28.304005    2089 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":953,"bootTime":1724794235,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 14:46:28.304074    2089 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 14:46:28.308198    2089 out.go:177] * [functional-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0827 14:46:28.315202    2089 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 14:46:28.315241    2089 notify.go:220] Checking for updates...
	I0827 14:46:28.323162    2089 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 14:46:28.327174    2089 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 14:46:28.328428    2089 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 14:46:28.331179    2089 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 14:46:28.334282    2089 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 14:46:28.337445    2089 config.go:182] Loaded profile config "functional-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 14:46:28.337696    2089 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 14:46:28.342129    2089 out.go:177] * Using the qemu2 driver based on existing profile
	I0827 14:46:28.349196    2089 start.go:297] selected driver: qemu2
	I0827 14:46:28.349205    2089 start.go:901] validating driver "qemu2" against &{Name:functional-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 14:46:28.349292    2089 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 14:46:28.356244    2089 out.go:201] 
	W0827 14:46:28.360147    2089 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0827 14:46:28.364147    2089 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-289000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
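Both invocations above validate configuration without creating a VM: the 250MB request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second passes validation. A sketch, same profile assumed:

  out/minikube-darwin-arm64 start -p functional-289000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2  # expected: exit status 23
  out/minikube-darwin-arm64 start -p functional-289000 --dry-run --alsologtostderr -v=1 --driver=qemu2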

TestFunctional/parallel/InternationalLanguage (0.12s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-289000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-289000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (120.627458ms)
-- stdout --
	* [functional-289000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0827 14:46:28.160495    2085 out.go:345] Setting OutFile to fd 1 ...
	I0827 14:46:28.160618    2085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 14:46:28.160624    2085 out.go:358] Setting ErrFile to fd 2...
	I0827 14:46:28.160626    2085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 14:46:28.160760    2085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
	I0827 14:46:28.162223    2085 out.go:352] Setting JSON to false
	I0827 14:46:28.179498    2085 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":953,"bootTime":1724794235,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0827 14:46:28.179594    2085 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0827 14:46:28.185299    2085 out.go:177] * [functional-289000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0827 14:46:28.194251    2085 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 14:46:28.194312    2085 notify.go:220] Checking for updates...
	I0827 14:46:28.202117    2085 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	I0827 14:46:28.206138    2085 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0827 14:46:28.209139    2085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 14:46:28.212211    2085 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	I0827 14:46:28.218282    2085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 14:46:28.221528    2085 config.go:182] Loaded profile config "functional-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 14:46:28.221810    2085 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 14:46:28.226193    2085 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0827 14:46:28.233183    2085 start.go:297] selected driver: qemu2
	I0827 14:46:28.233192    2085 start.go:901] validating driver "qemu2" against &{Name:functional-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 14:46:28.233264    2085 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 14:46:28.240156    2085 out.go:201] 
	W0827 14:46:28.244126    2085 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0827 14:46:28.248131    2085 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.24s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

TestFunctional/parallel/AddonsCmd (0.1s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (25.89s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5ee1fe75-6f2e-4d7e-9223-e12b18f2a494] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009643291s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-289000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-289000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-289000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-289000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b7c3429b-47c1-4430-b5a6-2c71313e2ef2] Pending
helpers_test.go:344: "sp-pod" [b7c3429b-47c1-4430-b5a6-2c71313e2ef2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b7c3429b-47c1-4430-b5a6-2c71313e2ef2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.005692875s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-289000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-289000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-289000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d9012552-a6d2-4ac0-b4b0-637838e84f49] Pending
helpers_test.go:344: "sp-pod" [d9012552-a6d2-4ac0-b4b0-637838e84f49] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d9012552-a6d2-4ac0-b4b0-637838e84f49] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.007841125s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-289000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.89s)
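
The PVC round-trip above can be replayed by hand with the same manifests (paths are relative to the integration-test working directory in the minikube tree):

  kubectl --context functional-289000 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-289000 get pvc myclaim -o json
  kubectl --context functional-289000 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-289000 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-289000 delete -f testdata/storage-provisioner/pod.yaml

Deleting and re-creating the pod, as the test does, is what proves the point: the file written before deletion is still visible from the second pod because both mount the same claim.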

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.41s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh -n functional-289000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 cp functional-289000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1849208928/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh -n functional-289000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh -n functional-289000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.41s)
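
`minikube cp` copies in both directions, as exercised above; a minimal sketch with placeholder paths (assuming `minikube` on PATH):

  minikube -p functional-289000 cp ./local.txt /home/docker/remote.txt
  minikube -p functional-289000 cp functional-289000:/home/docker/remote.txt ./round-trip.txt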

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1463/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "sudo cat /etc/test/nested/copy/1463/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1463.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "sudo cat /etc/ssl/certs/1463.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1463.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "sudo cat /usr/share/ca-certificates/1463.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14632.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "sudo cat /etc/ssl/certs/14632.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14632.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "sudo cat /usr/share/ca-certificates/14632.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.38s)
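
The paired paths above reflect how minikube syncs user certificates into the guest: each cert shows up under both /etc/ssl/certs and /usr/share/ca-certificates, plus an OpenSSL subject-hash alias (the .0 files). A minimal sketch of the same spot check:

  minikube -p functional-289000 ssh "sudo cat /etc/ssl/certs/1463.pem"
  minikube -p functional-289000 ssh "sudo ls -l /etc/ssl/certs/51391683.0"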

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-289000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
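
The label check above is plain kubectl Go-templating; a minimal sketch that prints every label key on the first node:

  kubectl --context functional-289000 get nodes --output=go-template \
    --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'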

TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-289000 ssh "sudo systemctl is-active crio": exit status 1 (88.075875ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)
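
The pass condition here leans on systemctl semantics: `systemctl is-active` prints the unit state and exits non-zero for anything other than "active" (the status 3 above is the standard code for an inactive unit). A minimal sketch of the same probe:

  minikube -p functional-289000 ssh "sudo systemctl is-active crio"    # prints "inactive", exits 3
  minikube -p functional-289000 ssh "sudo systemctl is-active docker"  # prints "active", exits 0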

TestFunctional/parallel/License (0.4s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.40s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-289000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-289000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-289000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1947: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-289000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-289000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-289000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [96170916-9fcf-4dee-a52f-2e0c29cb150a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0827 14:45:46.566600    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "nginx-svc" [96170916-9fcf-4dee-a52f-2e0c29cb150a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003339625s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)
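
This is the standard LoadBalancer-on-minikube pattern: with `minikube tunnel` running, a Service of type LoadBalancer receives a routable ingress IP, which the next step reads back. A minimal sketch of that check:

  kubectl --context functional-289000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'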

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-289000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.157.183 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.06s)
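
The dig probe above queries the in-cluster DNS service directly from the host; 10.96.0.10 is the kube-dns ClusterIP that the running `minikube tunnel` makes reachable. A minimal sketch:

  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A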

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-289000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-289000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-289000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-bt96n" [f0005e4a-7fb0-4908-ad12-5a2f58999d3c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-bt96n" [f0005e4a-7fb0-4908-ad12-5a2f58999d3c] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.010236458s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)
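
The deployment above is the usual create/expose pair; a minimal sketch using the same image as the test:

  kubectl --context functional-289000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-289000 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-289000 get pods -l app=hello-node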

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 service list -o json
functional_test.go:1494: Took "280.654458ms" to run "out/minikube-darwin-arm64 -p functional-289000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30959
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30959
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
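
Once the NodePort service exists, minikube can resolve its URL in several forms, all exercised in this group; a minimal sketch (assuming `minikube` on PATH):

  minikube -p functional-289000 service hello-node --url
  minikube -p functional-289000 service --namespace=default --https --url hello-node
  minikube -p functional-289000 service hello-node --url --format={{.IP}}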

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "86.643958ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.609167ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "84.268666ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.66725ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
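
The timing difference above comes from --light, which lists profiles without validating each cluster's live status; a minimal sketch:

  minikube profile list -o json
  minikube profile list -o json --light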

TestFunctional/parallel/MountCmd/any-port (7.24s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-289000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port639797737/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724795178081810000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port639797737/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724795178081810000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port639797737/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724795178081810000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port639797737/001/test-1724795178081810000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (56.275833ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (79.629709ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 27 21:46 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 27 21:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 27 21:46 test-1724795178081810000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh cat /mount-9p/test-1724795178081810000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-289000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [97763ca7-64c9-4089-9b20-7460752f064f] Pending
helpers_test.go:344: "busybox-mount" [97763ca7-64c9-4089-9b20-7460752f064f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [97763ca7-64c9-4089-9b20-7460752f064f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [97763ca7-64c9-4089-9b20-7460752f064f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.010055167s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-289000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-289000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port639797737/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.24s)
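
The 9p mount flow above reduces to: start the mount daemon, verify the mount from inside the guest, then exercise it from a pod. A minimal sketch with a placeholder host path (the daemon keeps running until stopped):

  minikube -p functional-289000 mount /tmp/mount-src:/mount-9p &
  minikube -p functional-289000 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-289000 ssh -- ls -la /mount-9p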

TestFunctional/parallel/MountCmd/specific-port (1.15s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-289000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1607683484/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.367791ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-289000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1607683484/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-289000 ssh "sudo umount -f /mount-9p": exit status 1 (60.308042ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-289000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-289000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1607683484/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.15s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-289000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4210654987/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-289000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4210654987/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-289000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4210654987/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T" /mount1: exit status 1 (77.12925ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T" /mount1: exit status 1 (91.125375ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-289000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-289000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4210654987/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-289000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4210654987/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-289000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4210654987/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)
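
Stray mount daemons are cleaned up with a single flag, as the test does above; a minimal sketch:

  minikube mount -p functional-289000 --kill=true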

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.15s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-289000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-289000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-289000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-289000 image ls --format short --alsologtostderr:
I0827 14:46:38.118919    2232 out.go:345] Setting OutFile to fd 1 ...
I0827 14:46:38.119078    2232 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 14:46:38.119082    2232 out.go:358] Setting ErrFile to fd 2...
I0827 14:46:38.119084    2232 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 14:46:38.119225    2232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
I0827 14:46:38.119729    2232 config.go:182] Loaded profile config "functional-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 14:46:38.119792    2232 config.go:182] Loaded profile config "functional-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 14:46:38.120600    2232 ssh_runner.go:195] Run: systemctl --version
I0827 14:46:38.120607    2232 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/functional-289000/id_rsa Username:docker}
I0827 14:46:38.145417    2232 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)
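
`minikube image ls` supports the four output formats exercised by this group; a minimal sketch (assuming `minikube` on PATH):

  minikube -p functional-289000 image ls --format short
  minikube -p functional-289000 image ls --format table
  minikube -p functional-289000 image ls --format json
  minikube -p functional-289000 image ls --format yaml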

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-289000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kicbase/echo-server               | functional-289000 | ce2d2cda2d858 | 4.78MB |
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
| docker.io/library/nginx                     | latest            | a9dfdba8b7190 | 193MB  |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-289000 | cbcc947e65f4f | 30B    |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-289000 image ls --format table --alsologtostderr:
I0827 14:46:38.760979    2243 out.go:345] Setting OutFile to fd 1 ...
I0827 14:46:38.761134    2243 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 14:46:38.761137    2243 out.go:358] Setting ErrFile to fd 2...
I0827 14:46:38.761139    2243 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 14:46:38.761289    2243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
I0827 14:46:38.761748    2243 config.go:182] Loaded profile config "functional-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 14:46:38.761805    2243 config.go:182] Loaded profile config "functional-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 14:46:38.762608    2243 ssh_runner.go:195] Run: systemctl --version
I0827 14:46:38.762616    2243 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/functional-289000/id_rsa Username:docker}
I0827 14:46:38.786083    2243 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-289000 image ls --format json --alsologtostderr:
[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"cbcc947e65f4fc1d9fafcb26a3a30ba11dd6a4c88d6aad38e41cd3f129b04efd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-289000"],"size":"30"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-289000"],"size":"4780000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-289000 image ls --format json --alsologtostderr:
I0827 14:46:38.693016    2241 out.go:345] Setting OutFile to fd 1 ...
I0827 14:46:38.693139    2241 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 14:46:38.693143    2241 out.go:358] Setting ErrFile to fd 2...
I0827 14:46:38.693145    2241 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 14:46:38.693278    2241 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
I0827 14:46:38.693677    2241 config.go:182] Loaded profile config "functional-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 14:46:38.693743    2241 config.go:182] Loaded profile config "functional-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 14:46:38.694541    2241 ssh_runner.go:195] Run: systemctl --version
I0827 14:46:38.694550    2241 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/functional-289000/id_rsa Username:docker}
I0827 14:46:38.719913    2241 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-289000 image ls --format yaml --alsologtostderr:
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-289000
size: "4780000"
- id: cbcc947e65f4fc1d9fafcb26a3a30ba11dd6a4c88d6aad38e41cd3f129b04efd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-289000
size: "30"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-289000 image ls --format yaml --alsologtostderr:
I0827 14:46:38.620633    2239 out.go:345] Setting OutFile to fd 1 ...
I0827 14:46:38.620790    2239 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 14:46:38.620794    2239 out.go:358] Setting ErrFile to fd 2...
I0827 14:46:38.620796    2239 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 14:46:38.620935    2239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
I0827 14:46:38.621380    2239 config.go:182] Loaded profile config "functional-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 14:46:38.621448    2239 config.go:182] Loaded profile config "functional-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 14:46:38.622281    2239 ssh_runner.go:195] Run: systemctl --version
I0827 14:46:38.622290    2239 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/functional-289000/id_rsa Username:docker}
I0827 14:46:38.648127    2239 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-289000 ssh pgrep buildkitd: exit status 1 (57.77175ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image build -t localhost/my-image:functional-289000 testdata/build --alsologtostderr
2024/08/27 14:46:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-289000 image build -t localhost/my-image:functional-289000 testdata/build --alsologtostderr: (1.540489334s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-289000 image build -t localhost/my-image:functional-289000 testdata/build --alsologtostderr:
I0827 14:46:38.244159    2236 out.go:345] Setting OutFile to fd 1 ...
I0827 14:46:38.244387    2236 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 14:46:38.244393    2236 out.go:358] Setting ErrFile to fd 2...
I0827 14:46:38.244396    2236 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 14:46:38.244539    2236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19522-983/.minikube/bin
I0827 14:46:38.244967    2236 config.go:182] Loaded profile config "functional-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 14:46:38.245784    2236 config.go:182] Loaded profile config "functional-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 14:46:38.246623    2236 ssh_runner.go:195] Run: systemctl --version
I0827 14:46:38.246630    2236 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19522-983/.minikube/machines/functional-289000/id_rsa Username:docker}
I0827 14:46:38.270095    2236 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3817060478.tar
I0827 14:46:38.270153    2236 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0827 14:46:38.274040    2236 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3817060478.tar
I0827 14:46:38.275685    2236 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3817060478.tar: stat -c "%s %y" /var/lib/minikube/build/build.3817060478.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3817060478.tar': No such file or directory
I0827 14:46:38.275697    2236 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3817060478.tar --> /var/lib/minikube/build/build.3817060478.tar (3072 bytes)
I0827 14:46:38.284183    2236 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3817060478
I0827 14:46:38.287627    2236 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3817060478 -xf /var/lib/minikube/build/build.3817060478.tar
I0827 14:46:38.290762    2236 docker.go:360] Building image: /var/lib/minikube/build/build.3817060478
I0827 14:46:38.290805    2236 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-289000 /var/lib/minikube/build/build.3817060478
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:1efaeb5ff3599e8762189716be9bfdddeaed7d42ae64bff2a8c158432bbf565f done
#8 naming to localhost/my-image:functional-289000 done
#8 DONE 0.0s
I0827 14:46:39.695848    2236 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-289000 /var/lib/minikube/build/build.3817060478: (1.404852042s)
I0827 14:46:39.695911    2236 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3817060478
I0827 14:46:39.699825    2236 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3817060478.tar
I0827 14:46:39.703162    2236 build_images.go:217] Built localhost/my-image:functional-289000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3817060478.tar
I0827 14:46:39.703184    2236 build_images.go:133] succeeded building to: functional-289000
I0827 14:46:39.703187    2236 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.67s)
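The BuildKit steps above imply the contents of the testdata/build context: a 97B Dockerfile based on gcr.io/k8s-minikube/busybox with a no-op RUN and an ADD of content.txt, plus the content.txt file itself (the 62B context transfer). A minimal Dockerfile sketch consistent with this log (the actual testdata file may differ in detail):

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /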

TestFunctional/parallel/ImageCommands/Setup (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.849965208s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-289000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.87s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image load --daemon kicbase/echo-server:functional-289000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.51s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image load --daemon kicbase/echo-server:functional-289000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.38s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-289000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image load --daemon kicbase/echo-server:functional-289000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.28s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image save kicbase/echo-server:functional-289000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image rm kicbase/echo-server:functional-289000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.33s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-289000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 image save --daemon kicbase/echo-server:functional-289000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-289000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)

TestFunctional/parallel/DockerEnv/bash (0.27s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-289000 docker-env) && out/minikube-darwin-arm64 status -p functional-289000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-289000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.27s)
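Context for the two commands above: `minikube docker-env` prints shell exports that point the host docker CLI at the dockerd inside the functional-289000 VM, which is why the subsequent `docker images` runs against the cluster's daemon rather than the host's. Illustrative output (values modeled on the VM address 192.168.105.4 and MINIKUBE_HOME seen earlier in this log; the real output may differ):

    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.105.4:2376"
    export DOCKER_CERT_PATH="/Users/jenkins/minikube-integration/19522-983/.minikube/certs"
    export MINIKUBE_ACTIVE_DOCKERD="functional-289000"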

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-289000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-289000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-289000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-289000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (203.02s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-615000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0827 14:47:08.516911    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:49:24.625853    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:49:52.358841    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/addons-657000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-615000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m22.825522875s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (203.02s)

TestMultiControlPlane/serial/DeployApp (8.09s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-615000 -- rollout status deployment/busybox: (6.599208583s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-9nhcb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-9z2zg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-pvt5r -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-9nhcb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-9z2zg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-pvt5r -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-9nhcb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-9z2zg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-pvt5r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.09s)
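For orientation: the busybox-7dff88458-* pod names above come from a three-replica Deployment created by testdata/ha/ha-pod-dns-test.yaml, which the nslookup checks then exercise for in-cluster DNS across nodes. A hypothetical minimal equivalent of that manifest (the real testdata file may differ):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: busybox
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: busybox
      template:
        metadata:
          labels:
            app: busybox
        spec:
          containers:
          - name: busybox
            image: gcr.io/k8s-minikube/busybox
            command: ["sleep", "3600"]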

TestMultiControlPlane/serial/PingHostFromPods (0.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-9nhcb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-9nhcb -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-9z2zg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-9z2zg -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-pvt5r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-615000 -- exec busybox-7dff88458-pvt5r -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.73s)

TestMultiControlPlane/serial/AddWorkerNode (55.94s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-615000 -v=7 --alsologtostderr
E0827 14:50:44.839010    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:50:44.846080    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:50:44.859438    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:50:44.882809    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:50:44.924633    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:50:45.007899    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:50:45.169438    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:50:45.492820    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:50:46.135842    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:50:47.419227    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:50:49.982636    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:50:55.106002    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 14:51:05.348679    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-615000 -v=7 --alsologtostderr: (55.710918459s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.94s)

TestMultiControlPlane/serial/NodeLabels (0.15s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-615000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.15s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

TestMultiControlPlane/serial/CopyFile (4.36s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp testdata/cp-test.txt ha-615000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2775189844/001/cp-test_ha-615000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000:/home/docker/cp-test.txt ha-615000-m02:/home/docker/cp-test_ha-615000_ha-615000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m02 "sudo cat /home/docker/cp-test_ha-615000_ha-615000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000:/home/docker/cp-test.txt ha-615000-m03:/home/docker/cp-test_ha-615000_ha-615000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m03 "sudo cat /home/docker/cp-test_ha-615000_ha-615000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000:/home/docker/cp-test.txt ha-615000-m04:/home/docker/cp-test_ha-615000_ha-615000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m04 "sudo cat /home/docker/cp-test_ha-615000_ha-615000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp testdata/cp-test.txt ha-615000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2775189844/001/cp-test_ha-615000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000-m02:/home/docker/cp-test.txt ha-615000:/home/docker/cp-test_ha-615000-m02_ha-615000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000 "sudo cat /home/docker/cp-test_ha-615000-m02_ha-615000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000-m02:/home/docker/cp-test.txt ha-615000-m03:/home/docker/cp-test_ha-615000-m02_ha-615000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m03 "sudo cat /home/docker/cp-test_ha-615000-m02_ha-615000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000-m02:/home/docker/cp-test.txt ha-615000-m04:/home/docker/cp-test_ha-615000-m02_ha-615000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m04 "sudo cat /home/docker/cp-test_ha-615000-m02_ha-615000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp testdata/cp-test.txt ha-615000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2775189844/001/cp-test_ha-615000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000-m03:/home/docker/cp-test.txt ha-615000:/home/docker/cp-test_ha-615000-m03_ha-615000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000 "sudo cat /home/docker/cp-test_ha-615000-m03_ha-615000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000-m03:/home/docker/cp-test.txt ha-615000-m02:/home/docker/cp-test_ha-615000-m03_ha-615000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m02 "sudo cat /home/docker/cp-test_ha-615000-m03_ha-615000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000-m03:/home/docker/cp-test.txt ha-615000-m04:/home/docker/cp-test_ha-615000-m03_ha-615000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m04 "sudo cat /home/docker/cp-test_ha-615000-m03_ha-615000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp testdata/cp-test.txt ha-615000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2775189844/001/cp-test_ha-615000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000-m04:/home/docker/cp-test.txt ha-615000:/home/docker/cp-test_ha-615000-m04_ha-615000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000 "sudo cat /home/docker/cp-test_ha-615000-m04_ha-615000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000-m04:/home/docker/cp-test.txt ha-615000-m02:/home/docker/cp-test_ha-615000-m04_ha-615000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m02 "sudo cat /home/docker/cp-test_ha-615000-m04_ha-615000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 cp ha-615000-m04:/home/docker/cp-test.txt ha-615000-m03:/home/docker/cp-test_ha-615000-m04_ha-615000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-615000 ssh -n ha-615000-m03 "sudo cat /home/docker/cp-test_ha-615000-m04_ha-615000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.36s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0827 15:05:44.828615    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
E0827 15:07:07.911517    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.09626625s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.10s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.49s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-598000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-598000 --output=json --user=testUser: (3.491132875s)
--- PASS: TestJSONOutput/stop/Command (3.49s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-388000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-388000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.799167ms)

-- stdout --
	{"specversion":"1.0","id":"df242665-a750-4438-8bf1-cd5ccf18fb3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-388000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e00baa3-9da3-4e1e-95bf-b76716690817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19522"}}
	{"specversion":"1.0","id":"003bb79f-2dbb-4071-b6f7-dea94bd4b4dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig"}}
	{"specversion":"1.0","id":"f81c7661-99dc-442d-a4f5-935234872349","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ae9cae87-b83b-4c98-8f04-cfddfbb3faed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b653125d-c41c-4eb6-b11a-b88ecac3ea8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube"}}
	{"specversion":"1.0","id":"7a475f1a-9ceb-4491-a981-a8e2b7960e3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9d1de868-1a51-4947-a21f-2d819b8582ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-388000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-388000
--- PASS: TestErrorJSONOutput (0.21s)
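Each stdout line above is a CloudEvents-style JSON object emitted by --output=json. A minimal Go sketch (not part of the test suite) that decodes such lines and surfaces io.k8s.sigs.minikube.error events, using only fields visible in the output above:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // minikubeEvent mirrors the fields visible in the JSON lines above.
    type minikubeEvent struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        // e.g. pipe `minikube start --output=json` into this program
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev minikubeEvent
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // tolerate any non-JSON lines
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                // error events carry name/message/exitcode, e.g. DRV_UNSUPPORTED_OS, exit 56
                fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }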

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.07s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.07s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-070000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-070000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.596167ms)

-- stdout --
	* [NoKubernetes-070000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19522-983/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19522-983/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-070000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-070000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.4885ms)

-- stdout --
	* The control-plane node NoKubernetes-070000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-070000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (31.4s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.669177167s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.726129375s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.40s)

TestNoKubernetes/serial/Stop (3.73s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-070000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-070000: (3.727969625s)
--- PASS: TestNoKubernetes/serial/Stop (3.73s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-070000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-070000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.676708ms)

-- stdout --
	* The control-plane node NoKubernetes-070000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-070000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-443000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

TestStartStop/group/old-k8s-version/serial/Stop (2.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-615000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-615000 --alsologtostderr -v=3: (2.029266459s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-615000 -n old-k8s-version-615000: exit status 7 (55.5765ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-615000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (2.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-908000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-908000 --alsologtostderr -v=3: (2.008228167s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-908000 -n no-preload-908000: exit status 7 (55.839042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-908000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (1.82s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-066000 --alsologtostderr -v=3
E0827 15:35:44.709013    1463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19522-983/.minikube/profiles/functional-289000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-066000 --alsologtostderr -v=3: (1.823767542s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.82s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-066000 -n embed-certs-066000: exit status 7 (55.481833ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-066000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-943000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-943000 --alsologtostderr -v=3: (3.589841041s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.59s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-943000 -n default-k8s-diff-port-943000: exit status 7 (56.702542ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-943000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-666000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
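The warning above is benign: the newest-cni profile starts the cluster with a CNI flag set but installs no pod network, so pod-scheduling checks are skipped for this group. A hypothetical follow-up, not performed in this run, would be to install a CNI first (flannel shown purely as an example):

    kubectl --context newest-cni-666000 apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
    kubectl --context newest-cni-666000 get pods -A   # pods can schedule once the CNI is up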

TestStartStop/group/newest-cni/serial/Stop (1.88s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-666000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-666000 --alsologtostderr -v=3: (1.883773625s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.88s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-666000 -n newest-cni-666000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-666000 -n newest-cni-666000: exit status 7 (55.302ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-666000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/270)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
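"Preload exists" refers to the per-Kubernetes-version images tarball that minikube caches; when it is already present, caching individual images is redundant and the subtest is skipped. A sketch of where to look on this runner (the MINIKUBE_HOME path appears elsewhere in this log; the cache subdirectory and file name follow the usual .minikube convention and are illustrative, not taken from this run):

    ls /Users/jenkins/minikube-integration/19522-983/.minikube/cache/preloaded-tarball/
    # e.g. preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4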

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.31s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-554000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-554000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-554000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-554000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-554000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-554000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-554000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-554000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-554000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-554000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-554000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: /etc/hosts:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: /etc/resolv.conf:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-554000

>>> host: crictl pods:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: crictl containers:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> k8s: describe netcat deployment:
error: context "cilium-554000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-554000" does not exist

>>> k8s: netcat logs:
error: context "cilium-554000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-554000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-554000" does not exist

>>> k8s: coredns logs:
error: context "cilium-554000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-554000" does not exist

>>> k8s: api server logs:
error: context "cilium-554000" does not exist

>>> host: /etc/cni:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: ip a s:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: ip r s:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: iptables-save:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: iptables table nat:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-554000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-554000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-554000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-554000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-554000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-554000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-554000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-554000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-554000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-554000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-554000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: kubelet daemon config:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> k8s: kubelet logs:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-554000

>>> host: docker daemon status:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: docker daemon config:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: docker system info:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: cri-docker daemon status:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: cri-docker daemon config:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: cri-dockerd version:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: containerd daemon status:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: containerd daemon config:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: containerd config dump:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: crio daemon status:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: crio daemon config:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: /etc/crio:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

>>> host: crio config:
* Profile "cilium-554000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-554000"

----------------------- debugLogs end: cilium-554000 [took: 2.203093125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-554000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-554000
--- SKIP: TestNetworkPlugins/group/cilium (2.31s)
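Every probe in the debugLogs dump above fails the same way because the cilium-554000 profile was never started: there is no kubeconfig context and no node to query. The two failure shapes map onto the two probe families; as a sketch (command shapes inferred from the probe labels, not quoted from the harness source):

    # ">>> k8s:" probes run kubectl against the profile's context
    kubectl --context cilium-554000 get nodes,services,endpoints,daemonsets,deployments,pods -A
    # ">>> host:" probes shell into the node through minikube
    out/minikube-darwin-arm64 -p cilium-554000 ssh -- cat /etc/resolv.conf

The first family reports "context was not found" / "does not exist"; the second reports the "Profile ... not found" hint seen throughout.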

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-924000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-924000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
